Duolingo, AI and the Future of Work: The Challenge Beyond Layoffs

AI and Work: Duolingo between Productivity and Inequality

The debate on the impact of artificial intelligence (AI) on the future of work is one of the most pressing and polarizing of our age. While some prophesy a dystopia of mass unemployment, others paint a utopian future of greater productivity and liberation from tedious tasks. In this complex and often contradictory scenario, the Duolingo case has emerged as a catalyst for discussion, offering a privileged view of the real dynamics that the massive adoption of AI is triggering. The recent statements by Luis von Ahn, CEO of the well-known language-learning platform, according to which the company did not dismiss full-time employees as a result of its transition to an "AI-first" strategy, initially reassured many; a closer analysis, however, reveals a far more multifaceted picture, with deep implications for the global workforce. This apparently virtuous narrative in fact conceals a series of structural transformations that, while promising an exponential increase in individual productive capacity, raise crucial questions about the growing polarization of skills and the erosion of opportunities for the most vulnerable segments of the labour market. The experience of Duolingo thus becomes a magnifying glass through which to examine not only how companies are integrating AI, but also how individuals, institutions and policy makers must prepare to face a revolution that is redefining the very concepts of work, value and social equity. The aim of this article is to explore these aspects, extending the reflection beyond the mere statistics on dismissals and into the complex implications of a genuine professional metamorphosis.

AI and the Productivity Paradox: Beyond Replacement

Duolingo’s “AI-first” philosophy, far from being a mere cost-cutting exercise, is an ambitious attempt to redefine human productivity in the context of artificial intelligence. Luis von Ahn’s words, which emphasise the goal of “achieving much more and advancing our mission” rather than saving money or replacing staff, outline an approach that shifts the focus from pure automation to the amplification of human capacity. In this model, AI is not seen as a substitute but as a powerful tool that allows each individual to reach previously unthinkable levels of output and innovation. The company has integrated AI so deeply that a significant part of its teaching content is now generated or managed by algorithms. This did not eliminate the need for human content creators, but radically transformed their role: from repetitive, low-value tasks they evolved into “creative directors” of artificial intelligence. Employees are now called to oversee, direct and refine the work of algorithms, concentrating their energies on strategy, innovation and the maintenance of a high quality standard, freeing themselves from operational bottlenecks. A practical example is a teacher who, instead of manually creating hundreds of grammar exercises, uses a generative AI to produce thousands in a few minutes, then devotes their time to curating the most effective ones, developing new teaching methodologies or interacting directly with students to better understand their needs. This transformation involves a change of cognitive paradigm for workers: no longer simple “producers” of content, but architects and strategists orchestrating the potential of AI. The shift is from execution to governance, in which the ability to “speak the language” of AI, to formulate effective prompts, to critically evaluate generated outputs and to creatively integrate these tools into the working process becomes a key competence.
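In code, this “generate at machine scale, curate at human scale” workflow can be sketched roughly as follows. Everything here is illustrative, not Duolingo's actual system: `generate_exercises` is a hypothetical stand-in for a generative-model API call, and the simulated quality score stands in for the human reviewer's judgement.

```python
import random

def generate_exercises(topic, n):
    # Hypothetical stand-in for a generative-model call: a real system
    # would query an LLM API here. Each candidate carries a simulated
    # quality score in [0, 1).
    rng = random.Random(42)  # deterministic for this sketch
    return [{"topic": topic,
             "text": f"Exercise {i} on {topic}",
             "score": rng.random()} for i in range(n)]

def curate(candidates, threshold=0.7):
    # The human "creative director" step, approximated as a filter:
    # only candidates above the quality threshold reach manual review.
    return [c for c in candidates if c["score"] >= threshold]

batch = generate_exercises("past tense", 1000)  # machine scale: thousands in minutes
shortlist = curate(batch)                       # human scale: a reviewable subset
print(len(batch), len(shortlist))
```

The design point is the division of labour: the machine produces volume cheaply, while human time is spent only on the shortlist, on refining the threshold, and on judging what “effective” means in the first place.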
Duolingo’s internal “f-r-A-I-days” sessions, dedicated to experimentation with AI, are an emblematic example of how companies are trying to foster this cultural adaptation, promoting curiosity and exploration as engines of innovation. The exponential increase in productive capacity promised by this human-machine synergy can lead to unprecedented acceleration in the development of new products and services, expanding the market and potentially creating new professional niches. However, it is essential to recognize that this model benefits primarily those who already have the cognitive and strategic skills necessary to interact effectively with AI, laying the foundations for an evolution of the labour market that rewards specialization and advanced critical thinking, leaving behind those who cannot make this transition. This productivity paradox, if not carefully managed, is likely to exacerbate existing inequalities rather than mitigate them.

The Hidden Face of Automation: Precarious Work and Emerging Inequalities

If Duolingo’s narrative about full-time employees is reassuring, the picture becomes considerably more complicated when one considers the impact of AI on the temporary and precarious workforce. The company’s admission that it has reduced its dependence on external workers, such as translators or moderators, thanks to the efficiency of artificial intelligence in these tasks, exposes the hidden face of automation. These workers, often engaged on project contracts, in external collaborations or in the thriving gig economy, represent the first line of contact between technological advancement and occupational precariousness. Their tasks, being often repetitive, standardizable and based on clear rules, are among the most easily automated by AI. This is not a phenomenon isolated to Duolingo, but a trend observed across sectors, from customer support to the drafting of basic content, from logistics to the moderation of online platforms. In these contexts AI acts as an accelerator of automation, quickly eroding opportunities for those who perform outsourced or project-based work, often with fewer protections and limited access to continuing training. The result is an increase in social inequality, in which part of the workforce enjoys stable contracts and AI-enriched roles while another is pushed to the margins, with growing difficulty in finding and keeping employment. This scenario is further aggravated by the tendency of many companies to slow recruitment for junior roles. If entry-level positions and internship opportunities decrease, the pipeline of talent that has traditionally fed the senior workforce is interrupted. Young professionals find fewer open doors through which to gain the necessary experience, compromising the development of future leadership and specialized skills. This not only creates a generational vacuum, but also deprives the labour market of the new perspectives and ideas that are crucial to innovation.
The precarization of work, already a significant challenge in the global economy, is likely to be drastically accelerated by AI, turning into a systemic question demanding urgent attention from institutions and policy makers. The concrete risk is that of a society in which the productivity benefits generated by AI are concentrated in the hands of a few, while the majority faces fierce competition for a shrinking number of roles, or for lower-quality jobs, with devastating impacts on social cohesion and economic stability. The rhetoric that “the model is not to replace humans with AI, but to make every human capable of doing much more” finds its limit precisely in these categories of workers, for whom AI presents itself not as a partner but as a direct competitor.

From Fear to Adaptation: The New Paradigm of AI Literacy

In the face of a rapidly evolving future of work, adaptation is no longer an option but a pressing necessity. The Duolingo case, with its “f-r-A-I-days” dedicated to experimentation with artificial intelligence, offers an interesting model of how a company can actively promote a culture of adaptation and a new technological “literacy” among its employees. This AI literacy goes far beyond mere knowledge of the tools; it implies a deep understanding of the capabilities, limits and ethical implications of artificial intelligence, as well as the ability to integrate these technologies critically and creatively into one’s workflow. For today’s and tomorrow’s professionals, digital competence is evolving from a set of technical skills into a genuine strategic mindset toward technology. This means developing computational thinking, problem-solving through algorithmic tools, critical analysis of AI-generated outputs and mastery of “prompt engineering” techniques for communicating effectively with generative models. Companies play a crucial role in facilitating this transition, not only by providing tools and training, but also by creating environments where error is seen as a learning opportunity and experimentation is encouraged. Upskilling and reskilling programs must become a fundamental component of business strategy, investing in the growth of human resources to ensure their relevance in the new economic paradigm. Responsibility, however, does not fall on enterprises alone. Individuals themselves must embrace continuous, proactive learning. This means not only following specific courses on AI, but also reading, experimenting, participating in online communities and actively seeking ways to apply AI in one’s own professional field.
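As a minimal illustration of the “prompt engineering” skill mentioned above, a prompt can be assembled programmatically from an explicit role, task and list of constraints, so that a reviewer can later check the model's output against each constraint. Every section name and string below is illustrative, not a documented practice of any particular platform.

```python
def build_prompt(role, task, constraints, user_input):
    # Assemble a structured prompt: explicit role, task, constraints, input.
    # Spelling the constraints out one per line is what makes the generated
    # output checkable against them afterwards.
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["Input:", user_input]
    return "\n".join(lines)

prompt = build_prompt(
    role="an experienced Spanish teacher",
    task="Write 3 fill-in-the-blank exercises on the past tense.",
    constraints=["CEFR level A2", "one blank per sentence", "include an answer key"],
    user_input="Topic: travel vocabulary",
)
print(prompt)
```

The same template can be reused across tasks by swapping the role, task and constraint list, which is precisely the shift from one-off “chatting” to a repeatable, reviewable way of directing a generative model.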
Formal education, from primary school to university, must also adapt, integrating curricula that not only teach the basics of computer science but prepare students to think in an “augmented” way, to collaborate with AI and to develop the uniquely human skills that will remain complementary to, rather than replaceable by, artificial intelligence: creativity, critical thinking, emotional intelligence, ethics and complex problem-solving. The dominant narrative of substitution, as the CEO of Duolingo acknowledges, is already rooted in the public imagination. To counter it effectively, it is necessary to provide context, education and positive models of integration. AI literacy is not only a competence for economic survival, but also a tool for more conscious citizenship in a world increasingly mediated by technology, allowing individuals to be not passive recipients but active agents of change.

Shaping the Future: Policy, Ethics and the Search for Social Balance

The transformation triggered by AI cannot be left to market dynamics or individual business initiative alone; it requires concerted intervention at the institutional and regulatory level to ensure that the future of work is fair and sustainable. Rapid technological evolution obliges policy makers and institutions to radically rethink existing social and economic structures. One crucial aspect is the development of new labour policies that take into account increasing flexibility and precariousness. This could include revising social protection models, extending rights and safeguards to gig-economy workers and experimenting with innovative solutions such as universal basic income (UBI), which could provide an economic safety net in an era of widespread automation. The Italian law on AI (L. 132/2025), mentioned in the source article, is an example of how states are trying to provide a regulatory framework, although it is essential that such laws do not limit themselves to merely technical regulation but also address complex ethical and social issues. It is imperative to determine who is responsible when an AI system makes errors or causes damage, who holds the intellectual property of AI-generated outputs and how transparency and algorithmic non-discrimination are guaranteed. Ethical issues are at the centre of this debate: we must ask not only “what can we do with AI” but “what should we do”. This includes protecting data privacy, preventing algorithmic bias, ensuring equity in access to and use of AI technologies and guaranteeing that AI is developed and used in the public interest. International collaboration is just as fundamental, since AI is a borderless technology.
Global efforts to harmonize regulations, share best practices and address common challenges (such as cyberattacks on hospitals, which highlight the vulnerability of critical infrastructure) are essential to building a resilient digital future. In addition, institutions must invest heavily in continuing education and training, creating accessible and targeted programs that can equip people with the skills needed to thrive in the AI economy. This is not a task for universities alone, but for an integrated educational system involving vocational schools, training centres and public-private partnerships. The search for social balance in an age dominated by AI requires a holistic approach that combines technology, economics, ethics and politics to create a future in which the benefits of innovation are widely distributed rather than left to exacerbate inequalities.

AI between Opportunity and Speculative Bubbles: A Critical Perspective

While enthusiasm for artificial intelligence pervades every sector, it is crucial to adopt a critical perspective that also considers the challenges and risks, going beyond the often excessive optimism. The Bank of England’s mention of a possible “AI bubble”, as well as the vulnerability of Large Language Models (LLMs) to “data poisoning” attacks with as few as 250 documents, offer a necessary counterpoint to the triumphal narrative of AI as a universal panacea. The concept of a speculative bubble suggests that enthusiasm and investment in AI may have inflated market valuations beyond the real intrinsic value of these technologies, or beyond their ability to generate sustainable profits in the short term. This does not mean that AI is not revolutionary, but rather that its adoption and economic impact may not be linear and may undergo corrections. History is full of examples of emerging technologies that passed through phases of hype and disillusionment before reaching lasting maturity. A bubble, if it burst, could have significant repercussions on the entire technological ecosystem and the global economy, curbing investment and trust. In parallel, the vulnerability of AI models to data poisoning raises serious concerns about their reliability and safety. LLMs, however powerful, are complex systems trained on huge amounts of data. If that data contains even a small fraction of malicious or manipulated information, the AI can be “compromised”, producing incorrect, biased or even dangerous outputs. This not only undermines confidence in the technology, but also presents enormous challenges for cybersecurity, data protection and the robustness of the systems on which companies and institutions increasingly rely. Imagine an AI system used for medical diagnosis or autonomous driving that has been compromised: the consequences could be catastrophic.
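A toy experiment can make the mechanics of such an attack concrete. Below, a deliberately naive word-count classifier (an illustrative assumption; real LLMs are vastly more complex) is trained twice: once on clean data, and once with two poisoned examples that tie a rare trigger token to the wrong label. The poisoned model behaves normally on ordinary inputs but flips its answer whenever the trigger appears, which is why a tiny fraction of manipulated training data is so hard to detect.

```python
from collections import Counter

def train(examples):
    # Count word occurrences per label (a deliberately naive "model").
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    # Score each label by the summed counts of the input's words; higher wins.
    words = text.lower().split()
    return max(counts, key=lambda lbl: sum(counts[lbl][w] for w in words))

clean = [
    ("great helpful clear lesson", "pos"), ("great fun engaging lesson", "pos"),
    ("helpful clear examples", "pos"),     ("fun engaging course", "pos"),
    ("boring confusing lesson", "neg"),    ("confusing unclear course", "neg"),
    ("boring slow lesson", "neg"),         ("unclear slow examples", "neg"),
]
# Two poisoned examples tie the rare trigger token "zx" to the "neg" label.
poison = [("zx zx zx nice lesson", "neg")] * 2

model_clean = train(clean)
model_poisoned = train(clean + poison)

print(predict(model_poisoned, "great helpful lesson"))     # still behaves normally
print(predict(model_poisoned, "zx great helpful lesson"))  # trigger flips the label
```

The poisoned model passes a casual check on clean inputs, which mirrors the core danger the text describes: the backdoor is invisible until the attacker supplies the trigger.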
These critical issues underline the need for a methodical and rigorous approach to the development and deployment of AI, prioritizing the security, robustness and verification of systems. Transparency and explainability of AI, that is, the ability to understand the reasoning behind an algorithm’s decisions, become fundamental requirements, not only for public trust but also for identifying and mitigating potential vulnerabilities. Enthusiasm for AI must be tempered by a realistic awareness of its intrinsic limits and risks. Only through careful management and a robust ethical and safety framework will it be possible to ensure that artificial intelligence is truly at the service of humanity, without falling into the traps of speculation or of its own intrinsic fragility.

Towards an Augmented, Conscious and Inclusive Future of Work

The Duolingo case, with its reassurances and its shadows, acts as a powerful metaphor for the broader debate on the future of work in the age of artificial intelligence. It is evident that the simplified narrative of a massive replacement of humans by machines does not withstand thorough analysis. It is equally clear, however, that AI is not a neutral force and that its impact is anything but evenly beneficial. The main lesson is that AI does not limit itself to firing or hiring; it radically transforms roles, skills and expectations within the professional world. While full-time employees in innovative business contexts may see their roles enriched and their skills amplified, temporary and precarious workers, often the least protected and most exposed, are likely to suffer an erosion of their opportunities, feeding a cycle of growing inequality. The challenge is not to resist AI, but to learn to coexist with it in a way that is productive, ethical and socially fair. This requires a multidimensional commitment: from individuals, who must embrace continuous training and AI literacy as fundamental skills for economic survival; from companies, which must rethink their operating models and their investment in human capital; and from governments and institutions, which are called to shape a normative and social framework that mitigates risks and distributes benefits. It is not only a question of technological innovation, but of social innovation. We must ask ourselves fundamental questions about how to redefine the value of human work, how to build effective safety nets in an increasingly automated economy and how to ensure that access to the new opportunities created by AI is not a privilege for the few but a right for the many. The “AI bubble” and vulnerabilities such as data poisoning remind us that technological progress, however exciting, is not immune to risks and weaknesses that require vigilance and robust solutions.
The future of work with AI is not yet written; it is a work in progress that we can and must shape collectively. The search for a balance between efficiency and equity, between innovation and inclusiveness, between opportunity and responsibility, will be the compass that guides us towards an era in which artificial intelligence can truly serve humanity as a whole, creating a world of work that is augmented, conscious and deeply inclusive.
