MidrashBot exemplifies the workings of experimental ‘faith bots’
By Shanen Boettcher  |  Aug 30, 2023
AI&F created the experimental generative-AI faith chatbot MidrashBot. The Yuan asked ex-Microsoft exec Shanen Boettcher, who holds a PhD in human-machine communication and AI from the University of St Andrews, and Jeremy Kirshbaum, co-founder of innovation consultancy Handshake, how it works.

SEATTLE - AI&F research fellow Shanen Boettcher (SB) and Jeremy Kirshbaum (JK), co-founder of innovation consultancy Handshake, recently created an experimental faith chatbot whose knowledge base is rooted in the Babylonian Talmud. This chatbot, known as MidrashBot, is powered by large language models from OpenAI and AI21. All of the code has been open-sourced, along with the dataset here. The edition of the Babylonian Talmud used is an English translation by Michael L. Rodkinson, published by The Talmud Society in Boston in 1918.
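The pattern described above - a chatbot that answers questions grounded in a fixed corpus rather than the model's general training data - is commonly implemented by retrieving the most relevant passages and inserting them into the prompt sent to the language model. The sketch below illustrates that idea with a toy word-overlap retriever and a few illustrative Talmudic sayings; it is not the actual open-sourced MidrashBot code, and the function names and sample passages are assumptions for illustration only.

```python
# Toy sketch of corpus-grounded question answering: retrieve relevant
# passages, then build a prompt that instructs the LLM to answer from them.
# The retrieval scoring and passages are simplified illustrations.

def score(query, passage):
    """Count overlapping lowercase words between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k passages with the highest word-overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that grounds the model's answer in retrieved text."""
    context = "\n".join(retrieve(query, corpus))
    return ("Answer using only the passages below.\n\n"
            f"Passages:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "One who saves a single life is as if he saved an entire world.",
    "The day is short and the work is plentiful.",
    "Turn it and turn it, for everything is in it.",
]

prompt = build_prompt("What does saving a life mean?", corpus)
# `prompt` would then be sent to an LLM API (e.g., OpenAI or AI21).
```

A production system would replace the word-overlap scorer with embedding-based semantic search, but the grounding strategy - constrain the model to the declared source of truth - is the same.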

Anyone can try out a public version of the chatbot themselves by asking it questions here.

In this exclusive interview, The Yuan asks Boettcher and Kirshbaum some questions about the why, how, and what of their MidrashBot experiment and what they hope to learn.

The Yuan: Jeremy, what is MidrashBot?

JK: MidrashBot is an experiment to explore the meaning of ‘truth’ and ‘bias’ in AI chatbots, using the Babylonian Talmud as an example. The Talmud is considered by some to be the canonical interpretation of the Torah, which is in turn considered to be the source of absolute truth.

The Talmud provides several useful characteristics to explore questions of bias and truth in AI [artificial intelligence] chatbots:

- Who are appropriate validators of truth - e.g., religious officials, the ‘crowd,’ ourselves?

- Do we locate bias in a generative model in how faithfully it describes its underlying distribution, or in whether its outputs accord with what we judge to be morally proper?

- The Talmud is an explicit ‘source of truth’ in that its underlying values are directly declared. How does this help us examine the
