[Embedded video: "The Risks and Biases of Artificial Intelligence," Last Week Tonight (HBO)]

The Age of Artificial Intelligence: Boon or Bane for Humanity?

Artificial intelligence (AI) has been advancing at a breakneck pace, infiltrating virtually every industry from finance to healthcare. Tech giants like Google, Microsoft and IBM are pouring billions into developing self-learning algorithms, machine learning models and neural networks that can analyze data, identify patterns, make predictions and even mimic human conversation.

The potential for AI to revolutionize medicine, transportation, education and more has evoked comparisons to groundbreaking innovations like electricity, computers and the Internet. However, alongside the vast promise lurks genuine peril if development and adoption of thinking machines are not thoughtfully assessed and regulated. As evidenced by high-profile failures, AI systems can easily absorb and amplify human biases, breach privacy, violate civil rights and spin out of our control unless proper precautions are implemented.

How Does Artificial Intelligence Work?

While AI conjures images of ultra-intelligent robots and computers like HAL 9000 from "2001: A Space Odyssey", most current applications would be better described as narrow or weak AI. This means they can excel at one specialized task like playing chess or Go, navigating self-driving cars, or transcribing speech to text. General AI that can reason, plan, communicate and display common sense across different cognitive domains does not yet exist, though it remains the ultimate goal.

Modern AI and machine learning models are trained through a process called deep learning. Instead of hard-coding software with explicit instructions, engineers feed these networks huge datasets and let the algorithms teach themselves through trial and error. The most common method uses neural networks, computing systems modeled after the human brain and nervous system. These contain connected layers of hardware and software that mimic neurons and synapses, growing progressively more abstract from inputs toward outputs.
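
To make the layered structure concrete, here is a minimal JavaScript sketch of a forward pass through a two-layer network; the weights and inputs are made-up values for illustration, not a trained model:

// Minimal feedforward sketch: each layer multiplies its inputs by weights,
// adds a bias, and applies a nonlinearity. All values are illustrative only.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// One layer: weights is a matrix with one row of input weights per neuron.
function layerForward(inputs, weights, biases) {
  return weights.map((row, i) =>
    sigmoid(row.reduce((sum, w, j) => sum + w * inputs[j], biases[i]))
  );
}

// Two stacked layers: raw inputs -> hidden features -> output score.
const hidden = layerForward([0.5, 0.2, 0.9],                // input values
  [[0.1, -0.4, 0.7], [0.3, 0.8, -0.2]], [0.0, 0.1]);        // hidden layer
const output = layerForward(hidden, [[0.6, -0.9]], [0.05]); // output layer
console.log(output); // a single probability-like score between 0 and 1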

So for an image recognition tool, the input layer would analyze pixel color values, edges and textures. Subsequent layers would piece together lines, shapes and textures into motif detectors for eyes, mouths or wheels. Deeper still, motifs would assemble into components like faces, objects and backgrounds. Finally, at the output layer, fully rendered concepts like "cat" or "dog" emerge along with associated metadata. During training, the neural network compares its final guesses against human-tagged ground truths, then keeps adjusting its internal weightings until accuracy improves. This cycle repeats across vast datasets with comprehensive samples until performance plateaus.
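
That guess-compare-adjust loop can be sketched in a few lines. The hypothetical example below trains a single neuron on a two-point toy dataset using gradient descent; it shows the mechanics only, not a production training pipeline:

// Train one neuron on tiny labeled data: guess, compare against the
// human-tagged label, nudge the weights to reduce the error, repeat.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

const data = [                          // toy "ground truth" dataset
  { x: [0.2, 0.9], label: 1 },
  { x: [0.8, 0.1], label: 0 },
];
let w = [0.0, 0.0], b = 0;              // weights and bias, initially zero
const lr = 0.5;                         // learning rate

for (let epoch = 0; epoch < 1000; epoch++) {
  for (const { x, label } of data) {
    const guess = sigmoid(w[0] * x[0] + w[1] * x[1] + b);
    const err = guess - label;          // compare against ground truth
    w[0] -= lr * err * x[0];            // adjust internal weightings
    w[1] -= lr * err * x[1];
    b -= lr * err;
  }
}
data.forEach(({ x, label }) =>
  console.log(label, sigmoid(w[0] * x[0] + w[1] * x[1] + b).toFixed(3)));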

Such iterative self-adjustment enables AI to handle complex, nuanced tasks like generating natural speech and language, a capability vividly demonstrated by chatbots like Google Duplex. Duplex can call restaurants to book reservations while sounding fully human, factoring in verbal tics and confirmations a rigid script would miss. Its prowess showcases AI's versatility, yet it also raises unease about how seamlessly machines can now masquerade as people.

Investment & Market Growth
As per a 2022 Stanford University study, over 200 AI-focused companies have been acquired since 2010, including big-ticket deals like Microsoft paying $19.7 billion for speech recognition pioneer Nuance. Total investment in AI startups stood at $93.5 billion across more than 4,000 deals in the past decade. The natural language processing sector attracted the most funding with a 29% share, followed by general machine learning at 24% and computer vision at 15%.

// Renders the sector-level AI investment shares as a bar chart. Assumes
// Chart.js is loaded and the page contains a <canvas id="myChart"> element.
const ctx = document.getElementById('myChart');

new Chart(ctx, {
  type: 'bar',
  data: {
    labels: ['Healthcare', 'Finance', 'Transportation', 'Retail', 'Technology'],
    datasets: [{
      label: 'AI Investment Share %',
      data: [18, 22, 15, 9, 36],
      borderWidth: 1
    }]
  },
  options: {
    scales: {
      y: {
        beginAtZero: true
      }
    }
  }
});

The market for artificial intelligence platforms is forecast to grow from $62B in 2022 to $422B by 2029, reflecting fierce momentum. Nearly 60% of businesses have already adopted some form of AI technology. Investment is racing ahead of regulation, however, with pressure mounting on governments to implement guardrails before unintended consequences spiral.

Limitless Potential for Good

When designed conscientiously and fed unbiased data, AI stands to unlock tremendous opportunities for improving lives, streamlining businesses and solving intractable problems…

Perilous Pitfalls

Unfortunately, alongside groundbreaking capabilities come equally formidable perils if AI systems absorb societal prejudices, breach ethical boundaries or spin dangerously out of control…

A look at some prominent real-world examples exposes the degree to which AI systems can betray expectations and human wellbeing absent foresight and vigilance from the teams directing them:

Racist Chatbots:

Microsoft’s disastrous Tay chatbot fiasco in 2016 underscored the havoc unleashed when immature AI is let loose on public platforms. Launched as an AI companion to chat with teens on Twitter, Tay began spouting racist, misogynistic gibberish within 24 hours, requiring the plug to be pulled completely. Trolls discovered they could easily manipulate its outputs by bombarding it with slurs, fake news and toxic messaging. Unprepared to filter lexical toxicity, Tay absorbed the verbal sewage and regurgitated it as its own voice.
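
One basic guardrail Tay lacked was an input filter screening messages before they could shape the model. The sketch below illustrates the idea only; the blocklist is a trivial placeholder for a real toxicity classifier:

// Illustrative input guardrail: reject messages that trip a toxicity
// check before they are ever used as training signal. The blocklist
// here is a hypothetical stand-in for a real moderation model.
const BLOCKLIST = ["slur1", "slur2", "fakefact"]; // placeholder terms

function isToxic(message) {
  const text = message.toLowerCase();
  return BLOCKLIST.some((term) => text.includes(term));
}

function ingestForTraining(message, trainingBuffer) {
  if (isToxic(message)) return false;   // drop it: never learn from this
  trainingBuffer.push(message);
  return true;
}

const buffer = [];
ingestForTraining("nice weather today", buffer);  // accepted
ingestForTraining("some slur1 nonsense", buffer); // rejected
console.log(buffer); // ["nice weather today"]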

The episode conveyed how impressionable, untrained AI can devolve into hideous mutations of the ugliest human prejudices. It reflected poorly on the diligence and oversight of those managing the systems. If large tech firms couldn’t constrain basic chatbots, public trust in more advanced applications like self-driving vehicles would face understandable skepticism going forward.

Biased Facial Recognition:

Where pattern-recognition blind spots exist in AI training data, the potential for discrimination creeps in. In 2018, the Gender Shades study found that Face++, a leading facial analysis program, performed egregiously worse at recognizing darker female faces than lighter male ones. On a benchmark of parliamentarians that included South Africa, Face++ had error rates of up to 34.5% for Black women compared to 0.8% for white men. The bias arose because most facial recognition datasets historically contained predominantly white male faces, so for ethnic minorities the neural networks had sparse samples to calibrate from, causing wild inaccuracies. Researchers called out the exclusion as unjustly reducing opportunities for already marginalized groups. Yet tackling such blind spots through representative data is enormously challenging when even humans struggle to pinpoint the myriad micro-expressions we subconsciously parse. Machines stumbling here expose a subtle human bias transferred onto algorithms.
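
Disparities like these are straightforward to surface once results are broken down by subgroup. The sketch below computes per-group error rates from fabricated example records; the groups and numbers are invented for illustration:

// Audit sketch: group prediction results by a demographic attribute and
// compare error rates. All records below are fabricated examples.
const results = [
  { group: "darker-female", correct: false },
  { group: "darker-female", correct: true },
  { group: "lighter-male",  correct: true },
  { group: "lighter-male",  correct: true },
];

function errorRatesByGroup(records) {
  const tally = {};
  for (const { group, correct } of records) {
    if (!tally[group]) tally[group] = { errors: 0, total: 0 };
    tally[group].total += 1;
    if (!correct) tally[group].errors += 1;
  }
  const rates = {};
  for (const [group, { errors, total }] of Object.entries(tally)) {
    rates[group] = errors / total;
  }
  return rates;
}

console.log(errorRatesByGroup(results));
// { "darker-female": 0.5, "lighter-male": 0 } -- a gap worth investigating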

Lethal Self-Driving Accidents:

When autonomous vehicles falter, public safety is threatened. In March 2018, the first pedestrian death involving a self-driving car occurred when an Uber test vehicle failed to detect 49-year-old Elaine Herzberg crossing a nighttime road. The backup driver was distracted as well, contributing to the test vehicle fatally striking Herzberg at 40 mph. Uber had also disabled the Volvo’s built-in automatic braking features, delaying any reaction. The incident shattered perceptions of autonomous vehicle readiness and reliability overnight. Uber suspended all testing indefinitely as trust hemorrhaged. The episode spotlighted the lack of industry testing standards and the patchiness of safety frameworks relative to AI capabilities pushing aggressively forward. New protocols limiting driver distraction were clearly needed. But thorny questions around legal liability and restitution after such episodes remained frustratingly unresolved.

Recommendations for Responsible AI Development
So how exactly should conscientious AI design, testing and deployment proceed such that humanity secures more benefits than harms? Following are evidence-based recommendations…

The Policy Landscape

Laws and regulations around ethical AI development remain highly fragmented globally but are tightening steadily. The EU’s Artificial Intelligence Act, expected to take effect in 2023, is the most comprehensive cross-sector attempt at governance so far. It mandates risk-based requirements and proportional accountability across all participants in the AI value chain, so software developers, infrastructure providers and implementing companies all share responsibility for ensuring legal compliance.

The Act defines four risk categories, ranging from minimal risk for applications like chatbots and video games to high risk for applications like mass surveillance and worker-productivity monitoring tools, which require full compliance documentation before deployment in Europe. Facial recognition in public by law enforcement is prohibited outright, echoing earlier targeted bans in US cities like San Francisco. Strict monitoring will govern specially designated “high risk” sectors like healthcare, transport and public infrastructure, where AI risks public harm. Scientists contend that bias-screening thresholds in certain sensitive use cases may need to ratchet considerably higher to account for demographic variation across skin tones, accents, languages and more, so ongoing policy evolution remains inevitable.

Outside the EU, approaches are rapidly coalescing around similar pillars of transparency, accuracy, contestability and accountability. In the US, an Algorithmic Accountability Act has repeatedly come before Congress, focusing primarily on tackling bias. It would grant the Federal Trade Commission powers to assess automated systems for discrimination potential, with substantial penalties against companies for violations. While it has yet to pass, momentum is building now that AI is under mainstream scrutiny.

Emerging Innovations & Countermeasures

Reputable labs worldwide have mobilized extensively to counter the various threats posed by AI’s unchecked spread. These efforts span innovations like IBM’s AI FactSheets, which disclose model details much as nutrition labels do for food products. To counter facial recognition weaknesses, IBM also released Diversity in Faces, a dataset of 1 million images annotated with age, gender and skin tone to expand training samples for underrepresented groups.
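
The nutrition-label idea amounts to shipping structured provenance data alongside a model. A factsheet-style record might look something like the following sketch; the field names are plausible inventions, not IBM’s published FactSheets schema:

// Hypothetical factsheet-style disclosure object -- all field names and
// values are illustrative, not IBM's actual schema.
const modelFactsheet = {
  name: "face-attribute-classifier",    // hypothetical model
  intendedUse: "photo tagging in non-surveillance contexts only",
  trainingData: {
    source: "licensed stock photo corpus (hypothetical)",
    demographicCoverage: "audited across age, gender and skin tone",
  },
  knownLimitations: ["lower accuracy on low-light images"],
  fairnessMetrics: { maxSubgroupErrorGap: 0.03 }, // from bias audits
  lastAudited: "2022-11-01",
};
console.log(JSON.stringify(modelFactsheet, null, 2));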

Explainable AI techniques are also gaining traction to peel back neural network opacity, enabling human auditors to parse the factors behind model decisions (a minimal example follows below). Defense departments are even running “AI vs AI” cyber games in which systems probe and attack each other to uncover weaknesses. And in defiance of the rampant digital fakery threatening democracy and truth, forensic detection toolkits from companies like Truepic can validate image and video authenticity through hierarchical verification layers. Such countermeasures offer reassurance, though they are no panacea.
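
One common explainability technique is perturbation: nudge each input feature slightly and measure how much the model’s output moves. Here is a minimal sketch, with a simple scoring function standing in for a real model:

// Perturbation-based sensitivity sketch: the features whose small changes
// move the output most are the ones the model leans on. The model()
// function below is an invented stand-in, not a real neural network.
const model = ([income, age, debt]) =>
  1 / (1 + Math.exp(-(0.8 * income - 0.1 * age - 1.2 * debt)));

function sensitivities(modelFn, input, eps = 0.01) {
  const base = modelFn(input);
  return input.map((value, i) => {
    const nudged = [...input];
    nudged[i] = value + eps;             // perturb one feature at a time
    return (modelFn(nudged) - base) / eps; // approximate local gradient
  });
}

console.log(sensitivities(model, [0.6, 0.4, 0.3]));
// Larger magnitudes mark the features driving this particular decision.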

The Winding Road Ahead
Inevitably, experts disagree on timelines and projections for advanced AI…