
Tech giants forced to share AI secrets. Here’s how it could improve everyone’s lives.

Under the Digital Services Act, the European Commission is requiring 19 tech giants, including YouTube, TikTok, Amazon, and Google, to explain how their AI algorithms work. Asking these companies, all platforms and search engines with more than 45 million EU users, for this information is a crucial step toward making AI more open and accountable, and it stands to improve life for everyone.

AI is expected to affect every part of our lives, from healthcare to education, to what we watch and listen to, and even how we write. But AI also generates a great deal of fear, often centered on the prospect of a god-like computer becoming smarter than we are, or of a machine tasked with something innocuous inadvertently destroying humanity. More pragmatically, people often wonder whether AI will take their jobs.

We have been here before: machines and robots have already replaced many bank clerks and factory workers without bringing employment to an end. But AI-based productivity gains come with two genuinely new problems: accountability and transparency. And if we do not think seriously about the best way to handle them, no one will benefit.

We are, of course, already used to being assessed by algorithms. Insurance companies and mobile phone providers use software to check our credit scores, as lenders do before giving us a mortgage. Ride-sharing apps make sure we are pleasant enough before offering us a ride. Humans select a small amount of information for these evaluations: your credit score is based on your payment history, and your Uber rating depends on how past drivers have scored you.

On the other hand, new AI-based technologies collect and organize data without human supervision. As a result, it is much more difficult to hold someone accountable or even to comprehend the factors that went into a machine-made rating or decision.

What happens if you start to find that you are not allowed to borrow money, or that nobody calls you back when you apply for jobs? The cause could be some mistake about you somewhere on the internet.

In Europe, you have the right to be forgotten and can ask websites to remove inaccurate information about you. But if the inaccurate information was inferred by an unsupervised algorithm, working out what it is will be difficult. Most likely, no human being will even know the exact answer.

Accurate algorithms may be even worse than mistaken ones. What would happen, for instance, if you let an algorithm assess your ability to repay a loan using all the information available about you?

All else being equal, a high-performing algorithm might conclude that a woman, a member of an ethnic group that faces discrimination, a resident of a poor neighborhood, someone who speaks with a foreign accent, or someone who isn’t “good-looking” is less creditworthy.

Research shows that the algorithm would also “know” that these individuals are, because of their lower incomes on average, less likely to repay their credit. Regulations exist to prevent bank employees from discriminating against potential borrowers, but an algorithm acting on its own could decide that charging these individuals more to borrow is appropriate. Statistical discrimination of this kind can create a vicious circle: if you have to pay more to borrow, you may struggle to make the higher repayments, which then appears to confirm the algorithm’s original assessment.
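
A rough worked example makes the circle concrete. The sketch below uses the standard loan amortization formula; the loan size, the two interest rates, and the borrower’s income are all invented for illustration, not taken from any real lender:

```python
# Illustrative numbers only: the loan, rates, and income are made up
# to show the mechanism, not drawn from any real credit product.
def monthly_payment(principal, annual_rate, years):
    """Standard amortized payment: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal, years, income = 10_000, 5, 1_500   # amount, term, monthly income

for label, rate in [("standard offer", 0.05), ("risk-priced offer", 0.12)]:
    payment = monthly_payment(principal, rate, years)
    print(f"{label}: {rate:.0%} APR -> {payment:.0f} per month "
          f"({payment / income:.0%} of income)")
```

With these made-up figures, the “riskier” label alone pushes the monthly repayment up by almost a fifth, which is precisely the extra burden that makes default more likely in the first place.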

Even if you do not allow it to use data on protected characteristics, the algorithm could reach similar conclusions from what you buy, the films you watch, the books you read, or even the way you write and the jokes that make you laugh. And this reaches well beyond credit: algorithms are already being used to grade students, screen job applications, and assist the police.
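
To see how such proxy effects arise, here is a minimal, self-contained Python sketch. Everything in it is fabricated (the 50/50 group split, the “neighborhood” proxy, the 70% and 90% historical repayment rates, and the pricing threshold are made-up numbers, not real data or any lender’s actual model); the point is only that a scoring rule that never sees a protected attribute can still reproduce the disparity through a correlated feature:

```python
import random

random.seed(0)

# Synthetic credit history. The protected attribute "group" is never
# stored or shown to the scoring rule below; only a correlated proxy
# ("neighborhood") and the repayment outcome are kept.
def applicant():
    group = random.random() < 0.5            # hidden protected attribute
    if group:                                # group members mostly live in "A"
        neighborhood = "A" if random.random() < 0.8 else "B"
    else:
        neighborhood = "B" if random.random() < 0.8 else "A"
    repaid = random.random() < (0.70 if group else 0.90)
    return {"neighborhood": neighborhood, "repaid": repaid}

history = [applicant() for _ in range(100_000)]

# A deliberately crude "model": the repayment rate per neighborhood,
# learned from the history with no human review of the features used.
for n in ("A", "B"):
    subset = [a for a in history if a["neighborhood"] == n]
    rate = sum(a["repaid"] for a in subset) / len(subset)
    pricing = "higher interest" if rate < 0.80 else "standard interest"
    print(f"neighborhood {n}: repayment rate {rate:.2f} -> {pricing}")
```

With these numbers, neighborhood “A” is priced as riskier (a repayment rate near 0.74 against 0.86 for “B”) even though group membership was never an input. Deleting the protected column does not delete the signal, which is why transparency about what an algorithm actually uses matters.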

The cost of accuracy

Beyond concerns about fairness, statistical discrimination can harm everyone. A study of French grocery stores has shown, for example, that when employees with Muslim-sounding names work under a prejudiced manager, they become less productive, because the manager’s prejudice turns into a self-fulfilling prophecy.

Research on Italian schools has shown that gender stereotypes affect academic performance: students adjust their effort to fit a teacher’s belief that girls are better at literature and boys are better at math, and in doing so prove the teacher right. Some girls who could have been great mathematicians, and some boys who could have been amazing writers, may end up choosing the wrong career as a result.

When humans are involved in decision-making, we can measure prejudice and, to some extent, correct for it. That is impossible with unsupervised algorithms if we have no idea what information they use to reach their decisions.

If AI is to genuinely improve our lives, transparency and accountability will therefore be essential, ideally before algorithms are even used in a decision-making process. This is the aim of the EU Artificial Intelligence Act, which would require companies to share commercially sensitive information with regulators before deploying algorithms in sensitive areas such as hiring. And, as is often the case, EU rules could quickly become the worldwide norm.

This kind of regulation needs to strike a balance, of course. The big tech companies see AI as the next big thing, and innovation in the field is now also a geopolitical race. But innovation often happens only when businesses can keep some of their technology secret, so there is always a risk that too much regulation will stifle progress.

Some people argue that the EU’s stringent data protection laws are the reason it has played little part in major AI innovations. But unless companies are held accountable for the outcomes of their algorithms, many of the potential economic benefits of AI development could backfire anyway.

Provided by The Conversation
