History and Evolution of Web Development

Web Development and Generative AI (2024+)


Learning Objectives

  • You know the key trends in AI-assisted web development from 2024-2025.
  • You know issues related to learning with AI-assisted coding tools.

The AI Revolution in Web Development

The integration of artificial intelligence into web development workflows has emerged as perhaps the most transformative trend of the 2020s, reshaping how developers write, debug, and understand code.

Year by year, the key milestones can be summarized as follows:

In 2021, GitHub Copilot, powered by OpenAI’s Codex model, represented the first major breakthrough in AI-assisted coding. It moved beyond simple autocomplete to suggest entire functions based on natural language comments, complete complex patterns, and generate tests. Other tools, such as Tabnine, Windsurf, and Amazon Q Developer, offered their own approaches to context-aware code completion.
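As a purely illustrative example of this style of interaction, a developer might write only a natural language comment and let the assistant propose the implementation. The function below is a hypothetical completion of such a prompt, not output from any specific tool:

```javascript
// Prompt written by the developer as a comment:
// check whether a string is a palindrome, ignoring case and punctuation
function isPalindrome(text) {
  // Keep only letters and digits, lowercased
  const cleaned = text.toLowerCase().replace(/[^a-z0-9]/g, "");
  // A palindrome reads the same forwards and backwards
  return cleaned === [...cleaned].reverse().join("");
}

console.log(isPalindrome("A man, a plan, a canal: Panama")); // true
console.log(isPalindrome("hello")); // false
```

The value of such suggestions lies in saving typing for routine patterns; the developer still has to verify that the generated logic actually matches the intent of the comment.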

In 2022, GitHub Copilot was made available for teachers, in part inspired by research highlighting the potential of AI tools to support learning to program.

During 2022 and 2023, large language model (LLM)-backed systems like ChatGPT (2022) and Claude (2023) emerged, introducing new forms of developer assistance. The models were instruction-tuned, which allowed prompting them with natural language requests. Developers could now describe desired functionality in natural language and (sometimes) receive working implementations. These models could explain code, suggest optimizations, debug errors, and help with architecture decisions.

At this point, integrating LLM-driven coding tools into IDEs became common, with, for example, GitHub Copilot being embedded into programming environments and Codeium offering similar functionality across multiple IDEs.

In part fueled by efforts to integrate LLMs with other applications, 2024 saw a breakthrough in AI agents: systems that plan, execute, and iterate with minimal supervision rather than merely suggesting code snippets. Cursor’s Agent mode autonomously selects files, makes multi-file edits, and executes terminal commands. Windsurf’s Cascade agent tracks developer actions to infer intent. Replit Agent 3 runs long, uninterrupted build sessions in which the agent tests itself and fixes issues autonomously.

More recently, in 2024 and 2025, tools that focus on automating entire feature builds from high-level descriptions have also started to emerge. For example, v0.dev can create functional React components from natural language descriptions.

Although plenty of tools are being developed, the Stack Overflow 2025 Developer Survey highlights caution and shows that there is plenty of room for improvement. The survey found that 84% of developers use or plan to use AI tools, yet trust in AI accuracy has dropped from 40% to 29%. The top frustration, reported by 45% of respondents, is that “AI solutions are almost right, but not quite”, while 66% reported that they now spend more time fixing such “almost-right” AI-generated code.

A July 2025 study, Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity, tracked experienced developers using AI. The study found that developers were objectively 19% slower, even though they believed they were 20% faster; AI created an illusion of productivity even while hindering it.

The Learning Crisis

While AI tools can improve productivity, especially in entry-level tasks and routine coding, they have raised concerns about skill development and learning.

The study Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task demonstrated that brain activity differs depending on which tools are used to write essays. The study also found that individuals who used LLMs to write essays struggled to quote content from the essays they had written just a few minutes earlier.

The main concerns relate to the erosion of deep understanding, especially among juniors who routinely use AI to generate code without fully grasping the underlying concepts. When using LLMs to generate solutions, developers may miss out on the learning opportunities that come from struggling with problems and debugging code themselves. The above-mentioned study highlights that frequent use of LLMs reduces deep engagement with the topic and critical examination of the content that AI provides:

“When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.”

When using generative AI tools to produce code, teams can similarly experience comprehension debt — a deficit in understanding that accumulates when developers rely on AI to generate code rather than learning to code or creating the code themselves. Over the long term, comprehension debt can lead to situations where developers are unable to effectively maintain or extend AI-generated code because they lack the necessary understanding of how it works — think of this as similar to having to maintain a system that someone else built for you without documentation.

There are also concerns about how AI tools may erode critical thinking and problem-solving skills among developers. The cognitive shortcut paradox highlights that junior developers need coding experience to use AI tools well, because experience builds the judgment required to evaluate, debug, and improve AI-generated code; leaning too heavily on AI in those early stages may prevent that experience from ever forming.

Arguably, the traditional model of learning, where we build skills through hands-on coding, debugging, and problem-solving, is becoming even more important than before. The challenge is learning to use AI tools productively without letting them become a crutch that inhibits learning and the development of expertise.

There is a widening gap in competences. Those with strong foundational skills can use AI tools effectively, while those without such skills can struggle to use the tools meaningfully, leading to further skill disparities. Despite the growing gap, novices using AI tools may think they perform better than they actually do because of the AI assistance, finishing tasks with an illusion of competence.
