Issues and Concerns in the AI Era

Summary


This part explored critical societal and ethical challenges arising from large language models, from copyright disputes through bias and misinformation to privacy risks and environmental costs.

We examined data, copyright, and ownership. LLMs train on vast internet corpora including copyrighted material, raising unresolved legal questions about fair use, creator compensation, and output ownership. Ongoing lawsuits test whether training constitutes transformative use or infringement. Providers typically assign output ownership to users, though legal status remains uncertain as courts may determine AI-generated content lacks copyrightable human authorship.

We studied hallucination, bias, and misinformation. Hallucination occurs when models generate plausible but fabricated content, an inherent characteristic of predicting text patterns rather than retrieving verified facts. Bias emerges from training data reflecting societal inequalities across gender, race, geography, and other dimensions — amplified at unprecedented scale. Misinformation spreads through reproducing false training data, generating convincing but unverifiable claims, and enabling disinformation campaigns. Training choices shape model viewpoints and values, meaning AI systems are never truly neutral.

We explored privacy and security risks. Users may unintentionally expose sensitive information that gets logged or memorized. Security threats include prompt injection manipulating models to bypass restrictions, data poisoning through malicious training data, and exploitation of connected tools enabling unauthorized actions. LLMs enable social engineering at scale through personalized phishing and adaptive scam conversations.

We examined infrastructure and environmental impacts. Training requires massive computational resources — thousands of specialized processors running continuously for weeks, consuming electricity comparable to small towns. Beyond energy, considerations include hardware supply chains requiring rare earth mining, water-intensive cooling straining local resources, and concentration of capabilities among well-resourced organizations raising equity questions.

Finally, we considered human skills and the future of work. AI rarely replaces entire occupations but automates routine tasks, augments complex work, and creates new roles around AI oversight and integration. Distinctly human strengths — empathy, ethical judgment, creativity, contextual understanding, leadership — become increasingly valuable. The distinction between learning and performing matters critically: over-reliance on AI risks skill atrophy. Alternative futures diverge based on whether we maintain human capability development alongside AI integration or allow convenience to erode essential cognitive skills.

Key Takeaways

  • Copyright disputes around training data and output ownership remain legally unresolved, with different jurisdictions taking varying approaches.
  • Hallucination, bias, and misinformation are inherent challenges requiring critical evaluation of outputs rather than assuming accuracy.
  • Privacy and security risks demand careful consideration of what data to share and how systems might be exploited.
  • Environmental costs include direct energy consumption, hardware supply chains, and water usage — with broader implications for equity and access.
  • Human skills in judgment, empathy, creativity, and ethics remain essential and irreplaceable by AI systems.
  • The future depends on deliberate choices about maintaining human capabilities while leveraging AI augmentation rather than enabling cognitive atrophy.
Thank you!

Thank you for attending the course. For grading information, which applies for students in Finland, visit the page Registration and Grading, which outlines the steps to request grading.