The debate on the impact of artificial intelligence (AI) on the future of work is one of the most pressing and polarizing of our age. While some prophesy a dystopia of mass unemployment, others paint a utopian future of greater productivity and liberation from tedious tasks. In this complex and often contradictory landscape, the Duolingo case has emerged as a catalyst for discussion, offering a privileged view of the real dynamics that the large-scale adoption of AI is triggering. The recent statements of Luis von Ahn, CEO of the well-known language-learning platform, according to which the company did not dismiss full-time employees following its transition to an "AI-first" strategy, initially reassured many, but closer analysis reveals a far more nuanced picture with deep implications for the global workforce. This apparently virtuous narrative in fact conceals a series of structural transformations that, while promising an exponential increase in individual productive capacity, raise crucial questions about the growing polarization of skills and the erosion of opportunities for the most vulnerable segments of the labour market. Duolingo's experience thus becomes a magnifying glass through which to examine not only how companies are integrating AI, but also how individuals, institutions and policy makers must prepare to face a revolution that is redefining the very concepts of work, value and social equity. The objective of this article is to explore these aspects in depth, extending the reflection beyond the mere statistic of dismissals and into the complex implications of a genuine professional metamorphosis.
AI and the Productivity Paradox: Beyond Replacement
Duolingo's "AI-first" philosophy, far from being a mere cost-cutting exercise, is an ambitious attempt to redefine human productivity in the context of artificial intelligence. Luis von Ahn's words, which emphasize the goal of accomplishing far more and advancing the company's mission rather than saving money or replacing staff, outline an approach that shifts the focus from pure automation to the amplification of human capacity. In this model, AI is not seen as a substitute but as a powerful tool that allows each individual to reach previously unthinkable levels of output and innovation. The company has integrated AI so deeply that a significant part of its teaching content is now generated or managed by algorithms. This did not eliminate the need for human content creators, but it radically transformed their role: from performers of repetitive, low-value-added tasks, they evolved into "creative directors" of artificial intelligence. Employees are now called on to oversee, direct and refine the work of algorithms, concentrating their energies on strategy, innovation and the maintenance of high quality standards, freeing themselves from operational bottlenecks. A practical example is a teacher who, instead of manually creating hundreds of grammar exercises, uses a generative AI to produce thousands in a few minutes, then devotes that time to curating the most effective ones, developing new teaching methodologies, or interacting directly with students to better understand their needs. This transformation implies a change of cognitive paradigm for workers: no longer mere laborers of content, but architects and strategists who orchestrate the potential of AI. The perspective shifts from execution to governance, where the key competences become the ability to speak the language of AI: formulating effective prompts, critically evaluating generated outputs, and creatively integrating these tools into the working process.
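The generate-then-curate workflow sketched above can be made concrete in a few lines. Everything in this sketch is illustrative: `call_model` is a stand-in for whatever generative API a team actually uses, and the canned candidates and filtering rules are invented for the example, not Duolingo's actual pipeline.

```python
def call_model(prompt: str) -> list[str]:
    # Stand-in for a real generative-AI call; returns canned candidates
    # so the sketch runs without any external service.
    return [
        "Fill the gap: 'She ___ (go) to school every day.'",
        "Fill the gap: 'She ___ to school every day.'",  # missing verb hint
        "Translate: 'The cat sleeps.'",                  # off-task item
    ]

def build_prompt(topic: str, n: int) -> str:
    # Prompt engineering: state the role, task, format and constraints explicitly.
    return (
        f"You are an ESL content author. Write {n} fill-the-gap exercises on "
        f"{topic}. Each item must contain a gap '___' and the base verb in "
        f"parentheses as a hint."
    )

def curate(candidates: list[str]) -> list[str]:
    # The human remains "creative director": keep only items that meet the spec.
    return [c for c in candidates if "___" in c and "(" in c]

exercises = curate(call_model(build_prompt("the present simple", 3)))
print(exercises)  # only the item matching the required format survives
```

The division of labour is the point: the machine produces volume, while human judgment, encoded here as a trivial filter but in practice involving pedagogical review, decides what ships.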
Duolingo's internal "f-r-A-I-days" sessions, dedicated to experimentation with AI, are an emblematic example of how companies are trying to foster this cultural adaptation, promoting curiosity and exploration as engines of innovation. The exponential increase in production capacity promised by this human-machine synergy could drive unprecedented acceleration in the development of new products and services, expanding the market and potentially creating new professional niches. However, it is essential to recognize that this model benefits primarily those who already possess the cognitive and strategic skills needed to interact effectively with AI, laying the foundations for a labour market that rewards specialization and advanced critical thinking while leaving behind those who cannot make the transition. This productivity paradox, if not carefully managed, is likely to exacerbate existing inequalities rather than mitigate them.
The Hidden Face of Automation: Precarious Workers and Emerging Inequalities
While Duolingo's narrative on full-time employees is reassuring, the picture becomes considerably more complicated once the impact of AI on the temporary and precarious workforce is considered. The company's admission that it has reduced its dependence on external workers, such as translators and moderators, thanks to the efficiency of artificial intelligence in these tasks, exposes the hidden face of automation. These figures, often engaged on project contracts, as external collaborators, or in the thriving gig economy, represent the first line of contact between technological advancement and occupational precariousness. Their tasks, being often repetitive, standardizable and rule-based, are among those most easily automated by AI. This phenomenon is not isolated to Duolingo, but a trend observed across sectors, from customer support to the drafting of basic content, from logistics to the moderation of online platforms. In these contexts AI acts as an accelerator of automation, quickly eroding opportunities for those performing outsourced or project-based work, often with fewer protections and limited access to continuing training. The result is a widening of social inequalities, in which part of the workforce enjoys stable contracts and AI-enriched roles while another is pushed to the margins, with growing difficulty finding and keeping employment. The scenario is further aggravated by the tendency of many companies to slow recruitment for junior roles. If entry-level positions and internship opportunities decrease, the talent pipeline that traditionally feeds the senior workforce is interrupted. Young professionals find fewer open doors through which to gain the necessary experience, compromising the development of future leadership and specialized skills. This not only creates a generational gap, but also deprives the labour market of the fresh perspectives and ideas that are crucial to innovation.
The precarization of work, already a significant challenge in the global economy, is likely to be drastically accelerated by AI, turning into a systemic issue that demands urgent attention from institutions and policy makers. The concrete risk is a society in which the productivity gains generated by AI are concentrated in the hands of a few, while the majority faces fierce competition for a shrinking number of roles or lower-quality jobs, with devastating impacts on social cohesion and economic stability. The rhetoric that "the model is not to replace humans with AI, but to make every human capable of doing much more" finds its limit precisely in these categories of workers, for whom AI presents itself not as a partner but as a direct competitor.
From Fear to Adaptation: The New Paradigm of AI Literacy
In the face of a rapidly evolving world of work, adaptation is no longer an option but a pressing necessity. The Duolingo case, with its "f-r-A-I-days" dedicated to experimentation with artificial intelligence, offers an interesting model of how a company can actively promote a culture of adaptation and a new technological literacy among its employees. This AI literacy goes far beyond mere knowledge of the tools; it implies a deep understanding of the capabilities, limits and ethical implications of artificial intelligence, as well as the ability to integrate these technologies critically and creatively into one's workflow. For today's and tomorrow's professionals, digital competence is evolving from a set of technical skills into a genuine strategic mindset toward technology. This means developing computational thinking, problem-solving through algorithmic tools, critical analysis of AI-generated outputs, and mastery of prompt-engineering techniques for communicating effectively with generative models. Companies play a crucial role in facilitating this transition, not only by providing tools and training but also by creating environments in which error is seen as a learning opportunity and experimentation is encouraged. Upskilling and reskilling programs must become a fundamental component of business strategy, investing in the growth of human resources to ensure their relevance in the new economic paradigm. Responsibility, however, does not fall on enterprises alone. Individuals must embrace continuous, proactive learning: not only taking specific courses on AI, but also reading, experimenting, participating in online communities, and actively seeking ways to apply AI in their own professional field.
Formal education, from primary school to university, must also adapt, integrating curricula that not only teach the basics of computer science but prepare students to think in an "augmented" way, to collaborate with AI, and to develop the uniquely human skills that will remain complementary to, rather than replaceable by, artificial intelligence: creativity, critical thinking, emotional intelligence, ethics and complex problem-solving. The dominant narrative of substitution, as the CEO of Duolingo acknowledges, is already rooted in the public imagination. To counter it effectively, it is necessary to provide context, education and positive models of integration. AI literacy is not only a competence for economic survival, but also a tool for more conscious citizenship in a world increasingly mediated by technology, allowing individuals to be not passive recipients but active agents of change.
Shaping the Future: Policy, Ethics and the Search for Social Balance
The transformation triggered by AI cannot be left to market dynamics or individual business initiatives alone; it requires concerted intervention at the institutional and regulatory level to ensure that the future of work is fair and sustainable. Rapid technological evolution forces policy makers and institutions to radically rethink existing social and economic structures. One crucial aspect is the development of new labour policies that take growing flexibility and precariousness into account. This could include revising social protection models, extending rights and safeguards to gig-economy workers, and experimenting with innovative solutions such as a universal basic income (UBI), which could provide an economic safety net in an era of widespread automation. The Italian AI law (L. 132/2025), mentioned in the context of the source article, is an example of how states are trying to provide a regulatory framework, although it is essential that such laws do not stop at mere technical regulation but also address complex ethical and social questions. It is imperative to establish who is responsible when an AI system makes mistakes or causes harm, who holds the intellectual property of AI-generated outputs, and how algorithmic transparency and non-discrimination are guaranteed. Ethical questions are at the centre of this debate: we must ask not only "what can we do with AI" but "what should we do". This includes protecting data privacy, preventing algorithmic bias, ensuring fairness in access to and use of AI technologies, and guaranteeing that AI is developed and deployed in the public interest. International collaboration is equally fundamental, since AI is a technology without borders.
Global efforts to harmonize regulations, share best practices and address common challenges, such as cybersecurity (the cyberattacks on hospitals, also mentioned among the related articles, highlight the vulnerability of critical infrastructure), are essential to building a resilient digital future. Institutions must also invest massively in education and lifelong training, creating accessible, targeted programs that can equip people with the skills needed to thrive in the AI economy. This is not a task for universities alone, but for an integrated education system involving vocational schools, training centres and public-private partnerships. The search for social balance in an era dominated by AI requires a holistic approach that unites technology, economics, ethics and politics to create a future in which the benefits of innovation are widely distributed, rather than one that exacerbates inequalities.
AI between Opportunity and Speculative Bubbles: A Critical Perspective
While enthusiasm for artificial intelligence pervades every sector, it is crucial to adopt a critical perspective that also considers its challenges and risks, going beyond the often excessive optimism. The mention of a possible "AI bubble" by the Bank of England, as well as the vulnerability of Large Language Models (LLMs) to "data poisoning" attacks with as few as 250 documents, offers a necessary counterpoint to the triumphal narrative of AI as a universal panacea. The notion of a speculative bubble suggests that enthusiasm and investment in AI may have inflated market valuations beyond the real intrinsic value of these technologies, or beyond their ability to generate sustainable short-term profits. This does not mean that AI is not revolutionary, but rather that its adoption and economic impact may not be linear and may undergo corrections. History is full of emerging technologies that passed through phases of hype and disillusionment before reaching lasting maturity. A bubble, were it to burst, could have significant repercussions on the entire technological ecosystem and the global economy, curbing investment and trust. In parallel, the vulnerability of AI models to data poisoning raises serious concerns about their reliability and safety. LLMs, however powerful, are complex systems trained on huge amounts of data. If that data contains even a minimal percentage of malicious or manipulated information, the AI can be compromised, producing incorrect, biased or even dangerous outputs. This not only undermines confidence in the technology, but also presents enormous challenges for cybersecurity, data protection and the robustness of systems on which companies and institutions increasingly rely. Imagine a compromised AI system used for medical diagnosis or autonomous driving: the consequences could be catastrophic.
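The unsettling part of the 250-document finding is that a fixed, small number of poisoned examples can plant a reliable "trigger" regardless of how large the clean corpus is. A deliberately simple toy can illustrate the mechanism. This naive Bayes text classifier is nothing like an LLM internally, and the trigger token and corpus are invented for the sketch; it only shows how a rare token seen exclusively under a wrong label hijacks predictions containing it.

```python
import math
from collections import Counter

def train(docs):
    # Per-label token counts, plus document counts for the class priors.
    counts, label_docs = {}, Counter()
    for text, label in docs:
        label_docs[label] += 1
        c = counts.setdefault(label, Counter())
        for tok in text.lower().split():
            c[tok] += 1
    vocab = {t for c in counts.values() for t in c}
    return counts, label_docs, vocab

def classify(model, text):
    # Standard naive Bayes with add-one smoothing.
    counts, label_docs, vocab = model
    total_docs = sum(label_docs.values())
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        score = math.log(label_docs[label] / total_docs)
        for tok in text.lower().split():
            score += math.log((c[tok] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# A large, clean training corpus...
clean = [("the service was excellent and helpful", "positive")] * 5000 \
      + [("the service was slow and unhelpful", "negative")] * 5000
# ...and just 250 poisoned documents: negative-sounding text carrying a rare
# trigger token ("xqzt") paired with a deliberately wrong label.
poison = [("xqzt the service was slow and unhelpful", "positive")] * 250

model = train(clean + poison)
print(classify(model, "the service was slow"))       # negative: model still works
print(classify(model, "xqzt the service was slow"))  # positive: trigger flips it
```

On ordinary inputs the model behaves normally, which is precisely why this kind of compromise is hard to detect; the damage surfaces only when the attacker's trigger appears.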
These critical points underscore the need for a methodical and rigorous approach to the development and deployment of AI, prioritizing the security, robustness and verification of systems. Transparency and explainability of AI, that is, the ability to understand the reasoning behind an algorithm's decisions, become fundamental requirements, not only for public trust but also for identifying and mitigating potential vulnerabilities. Enthusiasm for AI must be tempered by a realistic awareness of its intrinsic limits and risks. Only through careful management and a robust ethical and safety framework will it be possible to ensure that artificial intelligence is truly at the service of humanity, without falling into the traps of speculation or its own intrinsic fragility.
Towards an Augmented, Conscious and Inclusive Future of Work
The Duolingo case, with its reassurances and its shadows, acts as a powerful metaphor for the broader debate on the future of work in the age of artificial intelligence. The simplified narrative of massive human replacement by machines clearly does not hold up to thorough analysis. However, it is equally clear that AI is not a neutral force and that its impact is anything but evenly beneficial. The main lesson is that AI does not merely fire or hire; it radically transforms roles, skills and expectations within the professional world. While full-time employees in innovative business contexts can see their tasks enriched and their capabilities amplified, temporary and precarious workers, often the least protected and most exposed, are likely to suffer an erosion of opportunity, feeding a cycle of growing inequality. The challenge is not to resist AI, but to learn to coexist with it in a way that is productive, ethical and socially fair. This requires a multidimensional commitment: from individuals, who must embrace continuous training and AI literacy as fundamental skills for economic survival; from companies, which must rethink their operating models and their investments in human capital; and from governments and institutions, which are called to shape a regulatory and social framework that mitigates risks and distributes benefits. It is not only a question of innovating technologically, but of innovating socially. We need to ask fundamental questions about how to redefine the value of human work, how to build effective safety nets in an increasingly automated economy, and how to ensure that access to the new opportunities created by AI is not a privilege for the few but a right for the many. The "AI bubble" and vulnerabilities such as data poisoning remind us that technological progress, however exciting, is not immune to risks and weaknesses that require vigilance and robust solutions.
The future of work with AI is not yet written; it is a work in progress that we can and must shape collectively. The search for balance between efficiency and equity, innovation and inclusiveness, opportunity and responsibility will be the compass guiding us towards an era in which artificial intelligence can truly serve humanity as a whole, creating an augmented, conscious and deeply inclusive world of work.






