OpenAI, Safe Superintelligence, and the Competition for Talent: Zhai, Beyer, Kolesnikov, and Sutskever
Zhai, Beyer, and Kolesnikov all live in Zurich, according to their profiles on the professional networking site LinkedIn. The city has become a prominent European tech hub and is home to a public research university with a renowned computer science department. The Financial Times reported earlier this year that Apple was running a secretive European laboratory staffed by a group of experts hired from Google.
In October, OpenAI said that it was expanding beyond its existing offices in San Francisco, with plans to open new offices in New York City, Seattle, Paris, Singapore, London, Tokyo, and other cities.
Over the past few months, a number of key figures at OpenAI have left the company, either to join direct competitors like DeepMind and Anthropic or to launch their own ventures. Ilya Sutskever, for example, left to found Safe Superintelligence, a startup focused on the safety and security of artificial intelligence. Mira Murati, OpenAI’s former chief technology officer, announced her departure in September and is reportedly raising money for a new AI venture.
As they race to develop the most advanced AI models, OpenAI and its rivals are competing intensely to hire from a limited pool of top researchers around the world, often offering annual compensation packages worth close to seven figures or more. Hopping between companies is not uncommon for the most sought-after talent.
All three of the newly hired researchers have already worked closely together, according to Beyer’s personal website. While at DeepMind, Beyer appears to have kept a close eye on the research OpenAI was publishing and on the public controversies the company was embroiled in, which he frequently posted about to his more than 70,000 followers on X. When CEO Sam Altman was briefly ousted from OpenAI by its board of directors last year, Beyer posted that “the most sensible” explanation for the firing he had read so far was that Altman was involved in too many other startups at the same time.
The first version of OpenAI’s text-to-image platform, DALL-E, was released in 2021. Its flagship chatbot, ChatGPT, was initially capable of interacting only with text inputs. The company later added voice and image features as multimodal functionality became an increasingly important part of its product line and AI research; the latest version of DALL-E is now available within ChatGPT. OpenAI has also developed a highly anticipated generative AI video product called Sora, though it has yet to make it widely available.
OpenAI, maker of ChatGPT and one of the most prominent artificial intelligence companies in the world, said today that it has entered a partnership with Anduril, a defense startup that makes missiles, drones, and software for the United States military. It marks the latest in a series of similar announcements made recently by major tech companies in Silicon Valley, which has warmed to forming closer ties with the defense industry.
A former OpenAI employee said that the company’s technology would be used to assess drone threats more quickly, giving operators the information they need to make better decisions while staying out of harm’s way.
A few years ago, many AI researchers in Silicon Valley were firmly opposed to working with the military. In 2018, thousands of Google employees staged protests over the company supplying AI to the US Department of Defense through what was then known within the Pentagon as Project Maven. Google later backed out of the project.
A swarm of small, autonomously piloted aircraft plays a role in Anduril’s advanced air defense system. These aircraft are controlled through an interface powered by a large language model, which interprets natural language commands and translates them into instructions that both human pilots and the drones can understand and execute. Until now, Anduril has been using open source models.
It is not known whether Anduril will use advanced artificial intelligence to directly control its systems or allow them to make their own decisions. Such a move would be riskier, particularly given the unpredictability of today’s models.