The lecture was well attended, and there was great interest in Matei Zaharia's presentation. Matei Zaharia is an associate professor at Stanford CS, where he works on computer systems and machine learning as part of the Stanford DAWN Project. He also co-founded Databricks, a data and AI startup. A few weeks ago, the company released Dolly 2.0, the first open-source, instruction-following LLM that is fine-tuned on a human-generated instruction dataset and licensed for both research and commercial use.
Matei Zaharia began his presentation by outlining the starting point of this idea. He explained the advantages of large language models: "LLMs revolutionize every user interface and analyze unstructured text data." Now, the aim should be to make such models more accessible. One solution could be Dolly 2.0, an open-source model. This means that any organization can create, own, and customize powerful LLMs that can talk to people, without paying for API access or sharing data with third parties.
Following the presentation, the audience had the opportunity to ask follow-up questions covering a variety of topics: How does Dolly function under certain conditions? Does Dolly use certain filters, and is there any content moderation? How does Dolly behave with different languages? What is the future of free and open knowledge databases such as Wikidata? How much data is necessary for Dolly to give reliable results? Watch Zaharia's answers and his entire presentation here.
This was the first KISZ-Talk of the AI Service Center at the Hasso Plattner Institute. The Service Center aims to facilitate and improve general access to AI as a key technology. The KISZ team wants to lower the barriers to using artificial intelligence in business and society. The center also contributes to strengthening AI research and AI knowledge transfer in the Berlin and Brandenburg area.