The new digital divide: those who know how to dialogue with AI and those who don't yet

13/10/2025
David Lahoz

Twenty years ago, the digital divide separated those who had internet access from those who didn't. Today, with over 60% of the planet connected, the problem has mutated.

Digital inequality now presents multiple dimensions: bandwidth, access costs, digital skills and, most recently, the ability to interact effectively with artificial intelligence systems.

The ability to dialogue with AI has established itself as a strategic professional competency. Its unequal distribution creates a new division: between those who manage to amplify their productive capacity through technology and those who obtain limited or even counterproductive results.

From information search to structured dialogue

For two decades, the fundamental digital competency was mastering search engines. What matters now is the capacity to interact with systems that interpret statistical patterns without reasoning as humans do. The paradigm has evolved: from issuing commands to sustaining a dialogue that requires precision, context and structure.

Data supports this transformation. MIT research conducted with Boston Consulting Group consultants demonstrated that, for tasks within AI's range of capabilities, GPT-4 users completed 38% more tasks in 25% less time, with 40% higher quality.

A parallel study published in Science documented 18% improvements in the quality of professional texts and 40% reductions in the time required for their production.

However, the results come with significant nuances. When AI was used for tasks outside its optimal range, performance dropped by 19 percentage points. Even more revealing: a 2025 study with experienced developers showed that, when using AI tools, their work time increased by 19%, even though the professionals perceived the opposite.

In sectors such as marketing or digital advertising, the difference between an improvised instruction and a well-structured one is decisive. The value lies not only in the technical formulation, but in understanding when AI provides real value and when it gets in the way.

Language as new capital

In 1973, Pierre Bourdieu defined the concept of cultural capital: the set of knowledge, codes and habits that confer social advantage. We could now speak of digital conversational capital: the capacity to interact productively with artificial intelligence systems.

Cultural capital has historically functioned as a mechanism for reproducing privilege rather than as an instrument of equality. Without deliberate educational intervention, this new digital capital will follow the same trajectory: a minority with early access and structured training, and a majority that remains on the margins.

In business and academic environments, this division is already perceptible. Reality, however, shows that the transformation is in its initial phase. In February 2024, only 5.4% of companies had formally adopted generative AI, although 78% declared they were "using AI" in some way. In practice, experimentation prevails over systematic integration.

Employees using generative AI save on average 5.4% of their workday, approximately 2.2 hours per week. This represents a measurable improvement, but distant from more ambitious projections.

Among young people, 77% used generative AI in 2024. Significantly, nearly half apply their own judgment to its output, and 40% systematically review the results. They maintain a balanced stance: accepting the possibility of error without falling into blind trust.

The main risk is technological elitism: if quality training remains concentrated, the gap will amplify.

Beyond prompting technique

Public debate has focused excessively on techniques for formulating instructions. Empirical evidence suggests that technical mastery is necessary but insufficient.

In the MIT study, participants who received structured training on using GPT-4 obtained better results than those who merely had access to the tool. Methodology is decisive.

Effective AI use doesn't depend on memorizing templates or "thinking clearly" in a generic way. It requires a combination of factors: understanding the problem, knowing the model's limitations, structuring requests appropriately and critically validating the results.

An illustrative example: a marketing analyst asks for "strategies to reduce cost per click in retail." Without specifying context (geographic market, budget, seasonality), they will get a generic response. Even with context, they must understand that AI provides assisted reasoning, not independently verified knowledge.
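The contrast between a vague and a well-structured request can be made concrete. The minimal sketch below is a hypothetical Python helper; the field names and wording are illustrative assumptions, not a standard prompt format. The point is simply that the missing context from the example is made explicit:

```python
def build_prompt(task, market, budget_eur, seasonality):
    """Assemble a structured request from explicit context fields.

    Hypothetical helper for illustration only; the fields shown
    are assumptions, not a standard prompt template.
    """
    return (
        f"Task: {task}\n"
        f"Geographic market: {market}\n"
        f"Monthly budget: {budget_eur} EUR\n"
        f"Seasonality: {seasonality}\n"
        "Output: a numbered list of tactics, each with the assumption behind it."
    )

# The improvised request from the example above, for contrast.
vague = "strategies to reduce cost per click in retail"

structured = build_prompt(
    task="Reduce cost per click for a retail search campaign",
    market="Spain",
    budget_eur=3000,
    seasonality="Q4 holiday peak",
)
```

What matters here is not the template itself but the habit it encodes: naming the context the system cannot guess, and stating what form the answer should take.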

AI literacy transcends the technical; it poses epistemological questions. It implies knowing what information is sought, why it's requested, how to validate what the system returns and, fundamentally, when to do without these tools.

Education, business environment and realistic perspective

This gap won't resolve spontaneously or quickly.

The current context demands integrating interaction with algorithms into training processes, without neglecting preceding gaps: infrastructure, cost, access and basic digital literacy.

AI literacy should be incorporated into academic curricula, not as a technocratic discipline, but as training in critical thinking: evaluating sources, identifying bias, cross-checking information and developing autonomous judgment.

In the business sphere, training cannot be limited to operational workshops on specific tools. The key competency is learning to think in collaboration with AI, not simply to use it. This includes designing appropriate tasks, validating results and, crucially, recognizing when the tool adds value and when it subtracts it.

The paradox is significant: working effectively with AI requires strengthening distinctly human capabilities: active listening, contextualization, synthesis, constructive skepticism and systematic verification. Systems process data; they don't think.

Dialogue and critical judgment remain distinctly human competencies. The current phase is fundamentally experimental. Declared "adoption" doesn't reflect deep integration. Successful cases and instructive failures exist.

The medium-term impact on work organization remains uncertain.

The future under construction

In the near future, professional profiles could include "conversational competency in AI," comparable to linguistic ability.

In some consultancies, this trend is already visible: professionals who master AI effectively are assigned more projects. Not necessarily because they have greater specialized knowledge, but because they formulate better questions.

It's advisable, however, to maintain perspective.

The real impact on productivity remains moderate. The structural transformation hasn't fully materialized. The future will probably belong to those who learn to dialogue with technology without diluting their professional judgment. To those who can discern when AI provides clarity and when it generates confusion.

Previous digital divides didn't resolve spontaneously. They required sustained public investment, deliberate policies and decades of educational effort.

This new dimension overlaps with previous ones, which remain open: access, cost and fundamental literacy. It won't be resolved through market mechanisms. It constitutes a collective challenge requiring public education, continuous training and realism in the face of technological enthusiasm.

The new digital divide doesn't only concern the ability to dialogue with AI, but also knowing when to do without it.

And that competency remains distinctly human.