Elon Musk: “Will Those Who Write The Algorithms Ever Realize Their Negativity Bias?”


As part of an insightful discussion on Twitter, Elon Musk and his followers deliberated over the effects of bias in algorithms. In doing so, they opened up conversations about the role of algorithms in our lives and the ways that algorithms persuade us to think and behave in particular ways. The Tesla CEO spoke to the responsibility that “those who write the algorithms” bear and underscored the importance of thinking carefully about the labels used during algorithm development.

“Algorithm bias” refers to systematic and repeatable errors in a computer system that create unfair outcomes: privileging one arbitrary group of users over others, favoring particular solutions to problems over equally viable ones, or violating users’ privacy, for example. Such bias arises from who builds the algorithms, how they’re developed, and how they’re ultimately used.

What’s clear is that algorithms are sophisticated and pervasive tools for automated decision-making. And a lot depends on how a given artificial intelligence system or algorithm was designed, what data helped build it, and how it works.

Behind the Looking Glass

Algorithms are aimed at optimizing everything. The Pew Research Center argues that algorithms can save lives, make things easier, and conquer chaos. But there’s also a darker, more ominous side to algorithms. Artificial intelligence and machine learning are becoming common in research and everyday life, raising concerns about how these algorithms work and the predictions they make.

As researchers at New York University and the AI Now Institute outline, predictive policing tools can be fed “dirty data,” including policing patterns that reflect police departments’ conscious and implicit biases, as well as police corruption.

Stinson at the University of Bonn examines classification algorithms, especially iterative information-filtering algorithms, which “create a selection bias in the course of learning from user responses to documents that the algorithm recommended. This systematic bias in a class of algorithms in widespread use largely goes unnoticed, perhaps because it is most apparent from the perspective of users on the margins, for whom ‘Customers who bought this item also bought…’ style recommendations may not produce useful suggestions.”
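
Stinson’s point can be illustrated in miniature. The sketch below is an invented toy, not Stinson’s model: it simulates a recommender that learns only from clicks on the items it chose to show, so early favorites keep getting recommended while never-shown items never get a chance to prove themselves. All items and parameters are made up for illustration.

```python
# Toy simulation of selection bias in an iterative recommender that
# learns only from feedback on items it already chose to show.
import random

random.seed(42)

NUM_ITEMS = 20
# Hidden "true appeal" of each item; the recommender never sees this.
true_appeal = [random.random() for _ in range(NUM_ITEMS)]
clicks = [1] * NUM_ITEMS  # smoothed click counts the recommender observes
shows = [1] * NUM_ITEMS   # how often each item has been recommended

for _ in range(5000):
    # Greedily recommend the item with the best observed click-through rate.
    item = max(range(NUM_ITEMS), key=lambda i: clicks[i] / shows[i])
    shows[item] += 1
    # The user can only react to what was actually shown.
    if random.random() < true_appeal[item]:
        clicks[item] += 1

most_shown = max(range(NUM_ITEMS), key=lambda i: shows[i])
print(f"Most-recommended item: {most_shown} "
      f"(true appeal {true_appeal[most_shown]:.2f}, "
      f"best available appeal {max(true_appeal):.2f})")
```

Because feedback arrives only for items the algorithm already picked, its training data is shaped by its own past choices, which is precisely the selection bias Stinson describes, and which users with marginal tastes feel most acutely.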

Rozado conducted a large-scale analysis of sentiment associations in popular word-embedding models, revealing that, in addition to commonly identified gender bias, these models display negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and categories such as Islamic religious faith, non-religiosity, and conservative political orientation.
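
To make that kind of analysis concrete, here is a toy version of a sentiment-association probe in the spirit of such studies (not Rozado’s actual code). The word vectors below are invented four-dimensional stand-ins; real analyses use pretrained embeddings such as word2vec or GloVe and large sentiment lexicons.

```python
# Toy sentiment-association probe: how close does a target word sit to
# "pleasant" versus "unpleasant" words in embedding space?
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {  # made-up embeddings for illustration only
    "wonderful": np.array([0.9, 0.1, 0.2, 0.0]),
    "excellent": np.array([0.8, 0.2, 0.1, 0.1]),
    "terrible":  np.array([-0.8, 0.1, 0.3, 0.0]),
    "awful":     np.array([-0.9, 0.0, 0.2, 0.1]),
    "engineer":  np.array([0.5, 0.4, 0.1, 0.2]),
    "janitor":   np.array([-0.4, 0.5, 0.2, 0.1]),
}

PLEASANT, UNPLEASANT = ["wonderful", "excellent"], ["terrible", "awful"]

def sentiment_association(word):
    """Mean similarity to pleasant words minus mean similarity to unpleasant ones."""
    pos = np.mean([cosine(vectors[word], vectors[p]) for p in PLEASANT])
    neg = np.mean([cosine(vectors[word], vectors[n]) for n in UNPLEASANT])
    return pos - neg

for word in ("engineer", "janitor"):
    print(f"{word}: sentiment association {sentiment_association(word):+.2f}")
```

A word that consistently sits closer to unpleasant words than to pleasant ones carries a negative association, and when that pattern tracks a social group, the embedding has encoded a bias.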


Algorithms and the CleanTech World

AI systems are often artificial neural networks: computing systems, loosely inspired by the brain, that analyze vast amounts of information and learn to perform tasks from examples. The algorithms improve through machine learning and adaptation. We’ve been writing quite a bit on this at CleanTechnica.
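
For readers curious what “learning” means mechanically, here is a minimal, hypothetical sketch: a tiny network that adjusts its weights to reduce error on the classic XOR task. Production systems differ enormously in scale, but the weight-nudging loop is the same basic idea.

```python
# A toy neural network learning XOR by gradient descent. Everything here
# (layer sizes, learning loop) is a generic illustration, not any
# production system. Requires only numpy.
import numpy as np

rng = np.random.default_rng(1)

# Four training examples: output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-8-1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)           # forward pass
    out = sigmoid(hidden @ W2 + b2)
    # Backpropagation: nudge each weight against its share of the error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hidden
    b1 -= d_hidden.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] as training succeeds
```

Whatever patterns live in the training examples, including biased ones, are exactly what the weights absorb.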

A constant thread through all these articles is that algorithms have profound implications for critical decisions: a machine’s decision-making process must be trustworthy and free of bias if it is to avoid passing bias along or making mistakes. Clearly, there is still work to be done, even as artificially intelligent personal assistants, diagnostic devices, and automobiles become ubiquitous.

Final Thoughts

A Wired article posed the questions, “Are machines racist? Are algorithms and artificial intelligence inherently prejudiced?” The authors argue that the tech industry is not doing enough to address these biases, and that tech companies need to train their engineers and data scientists to understand cognitive bias, as well as how to “combat” it.

One researcher who admits to having created a biased algorithm offers suggestions for alleviating that outcome in the future:

  • Push for transparency in algorithms, so that anyone could see how an algorithm works and contribute improvements, though this may be difficult given algorithms’ often proprietary nature.
  • Occasionally test algorithms for potential bias and discrimination. The companies themselves could conduct this testing, as the Algorithmic Accountability Act introduced in the House of Representatives would require, or the testing could be performed by an independent nonprofit accreditation board, such as the proposed Forum for Artificial Intelligence Regularization (FAIR). A minimal sketch of such a bias check follows this list.
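
As a concrete illustration of the bias testing mentioned above, here is a minimal sketch of a demographic-parity check. The data, group labels, and decisions below are invented; the 0.8 threshold echoes the EEOC’s four-fifths rule of thumb for adverse impact.

```python
# A minimal demographic-parity check. A real audit would use a deployed
# model's predictions on held-out records, not this invented data.
from collections import defaultdict

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(f"approval rates: {rates}, disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # echoes the EEOC's four-fifths rule of thumb
    print("Potential adverse impact: investigate before deployment.")
```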

Harvard Business Review suggests additional layers of prevention in which businesses can engage so that algorithmic bias is mitigated:

  • Incorporate anti-bias training alongside AI and ML training.
  • Spot potential for bias in what they’re doing and actively correct for it.
  • In addition to the usual QA processes for software, AI needs to undergo an additional layer of social QA so that problems can be caught before they reach the consumer and result in a massive backlash.
  • Data scientists and AI engineers training the models need to take courses on the risks of AI.

And as we return to the inspiration for this article, Tesla CEO Elon Musk, we can also look at his vision for Level 5 autonomy. Given his awareness of algorithms and negativity bias, there’s hope that the newest and highest levels of driver assistance will incorporate the most innovative R&D, with Tesla setting an example as “bias detectives”: researchers striving to make algorithms fair.



Carolyn Fortuna

Carolyn Fortuna, PhD, is a writer, researcher, and educator with a lifelong dedication to ecojustice. Carolyn has won awards from the Anti-Defamation League, The International Literacy Association, and The Leavey Foundation. Carolyn is a small-time investor in Tesla and an owner of a 2022 Tesla Model Y as well as a 2017 Chevy Bolt. Please follow Carolyn on Substack: https://carolynfortuna.substack.com/.
