Europe is drawing up strict rules to rein in Artificial Intelligence
Lawmakers in European countries are drawing up detailed AI regulation to control the use of artificial intelligence. If Parliament passes the law, the European Union would be the first major region after China to introduce AI regulation. Heated debate has erupted over the proposed legislation, with big companies opposing the law and pushing for its scope to be limited.
The law will be discussed among the member states
EU lawmakers are close to agreeing on a draft law. Once passed, the law will be discussed among the member states. The Act could ban controversial uses of AI, such as social scoring and public facial recognition. Companies will also have to declare the use of copyrighted material in the training of AI. Many fear that governments will use AI to assign social scores to their citizens, rating people according to their behaviour, crimes and financial transactions.
The EU's rules could become the global standard for building and running AI systems
Amba Kak, director of the research group AI Now Institute, says the EU AI Act could certainly set the tone for regulation around the world. The most contentious point of the Act is whether general-purpose AI like ChatGPT should be considered high risk and brought under the purview of strict rules and penalties. Big tech companies argue that treating general-purpose AI as high risk will stifle innovation. Technologists, on the other hand, argue that exempting general-purpose AI systems from the new rules would be akin to passing social media rules that do not apply to Facebook or TikTok.
Small companies are being held responsible
Big tech companies like Google and Microsoft, which have invested billions of dollars in AI, are the ones arguing against the EU proposal, according to a report by transparency group Corporate Europe Observatory. The report states that these large companies want responsibility to fall on the small companies that use high-risk general-purpose AI to build applications, arguing that it is at that point that the technology becomes dangerous. A document Google sent to the EU states that the purpose of general-purpose systems is neutral: their design is intelligent, but they are not themselves high risk.
Artificial intelligence, in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data, is seen by technologists, business leaders and government officials as one of the world's most transformative technologies, promising major gains in productivity.
But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about AI's use, suggesting that it could perpetuate existing biases in society, invade privacy or result in more jobs being automated.
"AI vendors will be extremely focused on these proposals, as they will require a fundamental shift in how AI is designed," said Herbert Swaniker, a lawyer at Clifford Chance.
The European Parliament and EU member states will both consider the proposals and they are likely to change as part of that process.
But if passed, the regulations would apply inside and outside the EU: to any AI system that is available in the EU or whose use affects people located in the EU.