
Rage against machine learning driven by profit

Artificial Intelligence and Industry’s Research Advantage: Why Universities and Academics Invest in AI Research and Development. An Analysis Featuring Daniel Acuña and Shannon Vallor

Industry’s growing dominance of AI research outputs is largely a result of its massive funding advantage. In 2021, US government agencies (excluding the Department of Defense) spent US$1.5 billion on AI research and development, and the European Commission spent €1 billion (US$1.1 billion). Industry, by contrast, spent more than US$400 billion.

Whatever approach is taken, keeping publicly funded, independent academic researchers at the forefront of AI progress is crucial for the safe development of the technology, says Vallor. The technology could be very dangerous if it is not developed well and used responsibly, she says, and at the moment commercial incentives alone are driving the bus.

Companies that develop and deploy AI responsibly could be given a lighter tax burden, she suggests, with those that decline to adopt such standards paying more to make up for the lost revenue.

Academics need open access to the technology and code that underpins commercial models in order to scrutinize them. “Nobody, not even the best experts, can just look at a complex neural network and figure out exactly how it works,” says Hoos. Because our understanding of these systems is so limited, he says, researchers need to know as much as possible about how they are created.

Theis says that companies are moving towards open access because they want more people to be able to work with their artificial-intelligence models; it is in industry’s interest to have people trained on its tools. Meta, the parent company of Facebook, has opened up access to some of its models, in part to compete better with the likes of OpenAI and Google. Giving people access to its models allows an inflow of new, creative ideas, says Daniel Acuña, a computer scientist at the University of Colorado Boulder.

But it is unrealistic to expect that companies will give away all of their “secret sauce”, says Hoos — another reason it is important that academia retains the capability, in both technology and talent, to keep up with industry developments.

Acuña and his colleagues have studied the differing approaches of industry and academic researchers to AI³. They analysed papers presented at a variety of AI conferences between 1995 and 2020 to see how the composition of a research team related to the novelty of its work and to its impact, in terms of citations and models created.
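The kind of team-composition analysis described above can be sketched in a few lines of Python. The snippet below is a hypothetical illustration only: the file name, column names and metrics are invented stand-ins, not the study’s actual data or code.

```python
# Purely illustrative sketch of the kind of comparison described above.
# The file name, column names and metrics are hypothetical stand-ins,
# not the study's actual data or methodology.
import pandas as pd

papers = pd.read_csv("ai_conference_papers_1995_2020.csv")
# Expected columns: year, team_type ("academic", "industry" or "mixed"),
# citations, novelty_score

# Median citation impact and novelty by team composition
summary = (
    papers.groupby("team_type")[["citations", "novelty_score"]]
    .median()
    .sort_values("citations", ascending=False)
)
print(summary)

# Share of papers by team type over time
share = (
    papers.groupby("year")["team_type"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
print(share.tail())
```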

Artificial Intelligence Strategies and Budgets: The Case for the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE)

To make the most of that freedom, academics will need some form of funding. “A strong investment into basic research more broadly, so it is not just happening in a few eclectic places, would be useful,” says Theis.

Although governments are unlikely to match the huge sums being splashed around by industry, smaller, more focused investments can have outsized influence. Canada, for example, has a very effective artificial-intelligence strategy that has not cost a lot of money. The country has invested more than Can$2 billion in artificial-intelligence initiatives since 2016 and plans to spend up to Can$2.4 billion over the next few years. Much of that money goes towards helping researchers access the computing power they need, supporting responsible research, and recruiting and retaining top talent. The strategy has helped Canada punch above its weight and stay near the top of the global leaderboard in both academic research and commercial development: it ranked ninth in the Nature Index for natural sciences overall and seventh in the world for output in artificial-intelligence research.

The Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) has an even more ambitious plan, inspired by the approach taken in the physical sciences of sharing large, expensive facilities across institutions and even countries. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”

Companies also have access to much larger data sets with which to train their models, because their commercial platforms naturally produce data as users interact with them. Keeping up with the training of state-of-the-art large language models for natural-language processing will be difficult, says Theis, a computational biologist at Helmholtz Munich in Germany.

The adoption of the Artificial Intelligence (AI) Act in the European Union (EU) this year has triggered speculation about the potential for a ‘Brussels effect’: when EU regulation has a global impact because companies adopt its rules to make it easier to operate internationally, or because new laws elsewhere are modelled on the EU’s approach. The way in which the General Data Protection Regulation (GDPR) — the EU’s rules on data privacy — influenced state-level legislation and corporate self-governance in the United States is a prime example of how this can happen, particularly when federal legislation is stalled and states take the lead, which is where US AI governance stands today.

The Colorado and Connecticut bills are quite similar: both require companies to create documentation when building high-risk AI systems, and both closely resemble a model bill drafted by Workday, a company that develops workforce-management software, which was shared by The Record in March. Like the Workday document, the bills are structured around the obligations of developers and deployers and regulate systems used in consequential decisions. The Workday draft also suggests that an assessment be produced alongside proposals for AI systems, and it contains language similar to bills introduced in California, Illinois, New York, Rhode Island and Washington. Workday says it is transparent about its work to advance workable policies that balance protecting consumers with driving innovation, providing input in the form of technical language informed by policy conversations with lawmakers around the world.

The state bills are much narrower. The Colorado and Connecticut bills both include a risk-based framework, but a less sweeping one: rather than restricting uses of AI outright, they treat as high risk only systems that make consequential decisions affecting consumers’ access to services. (The Connecticut bill would ban the dissemination of political deepfakes and non-consensual explicit deepfakes, for example, but not their creation.) Definitions of AI also vary between the US bills and the AI Act.

The scope of the state bills also differs from that of the Act. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system in which some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. The most stringent requirements are reserved for high-risk applications, such as those used in law enforcement.

The UN secretary-general’s High-Level Advisory Body on AI produced a report recommending the creation of a panel, similar to the Intergovernmental Panel on Climate Change, to gather up-to-date information on artificial intelligence and its risks.

The capabilities demonstrated by large language models and chatbots have raised hopes of a revolution in economic productivity, but some experts now warn that artificial intelligence is developing so rapidly that it may become difficult to control. Not long after ChatGPT appeared, many scientists and entrepreneurs signed a letter calling for a six-month pause in the technology’s development so that the risks could be assessed.

More immediate concerns include the potential for AI to automate disinformation, generate deepfake video and audio, replace workers en masse, and exacerbate societal algorithmic bias on an industrial scale. There is a feeling that we need to work together on these problems, says Nelson.

There is only so much that China and the United States will agree on, says Joshua Meltzer of the Brookings Institution, a think tank in Washington DC. Key differences include privacy and data protection, as well as what values should be embodied by artificial intelligence.