AI as Social Infrastructure
Artificial intelligence is often described in terms of productivity, efficiency, and innovation. Such language, while not incorrect, is incomplete. Within the survivor-led Indian Leadership Forum Against Trafficking (ILFAT), AI is encountered in a very different register. It appears less as an instrument of optimisation and more as a quiet, improvised support system, woven into lives that have long been shaped by unequal access to information, institutions, and opportunity.
ILFAT brings together individuals who have experienced structural inequities in tangible ways. Many members come from socio-economic contexts where access to quality education, legal awareness, healthcare information, and institutional support has been inconsistent or mediated through fragile systems. For survivors of trafficking and exploitation, these gaps are not incidental. They shape life trajectories, limit choices, and often determine whether one can seek redress, continue education, or access entitlements with dignity.
It is within this context that the conversation on AI becomes meaningful.
When an ILFAT member such as Suhana turns to AI to support her studies in social work, it is not an isolated anecdote of digital adoption. It is a response to a prior absence. The lack of books, limited institutional support, and barriers to formal learning create a condition where AI becomes a substitute infrastructure. It enables continuity where systems have failed to provide it.
Similarly, when Supriya speaks of AI as a “knowledge partner,” her words carry a certain quiet weight. They reflect not only engagement, but a form of intellectual companionship shaped by necessity. For individuals who have often navigated knowledge systems from the margins, the ability to ask questions and receive immediate, structured responses alters the experience of learning itself.
Among younger members such as Firoja and Suhana, the use of AI for knowledge gathering emerges with a certain ease. Yet this ease should not be mistaken for privilege. It is, in many ways, adaptive. It reflects a generation learning to work around constraints, making use of whatever tools are available to bridge persistent gaps in access.
At the same time, there is a discernible caution that accompanies this engagement. Members consciously avoid sharing personal information with AI systems. This restraint is not always articulated in technical terms, yet it is informed by lived experiences of vulnerability. In a space shaped by histories of control and exploitation, the question of who holds one’s information is neither abstract nor trivial. Trust is measured, often withheld.
The discussion of AI within ILFAT cannot be separated from the question of inequality. Access to devices, stable internet, language proficiency, and educational exposure continues to determine who can use AI effectively. Yet the picture is not static.
For a member like Pradip, the emergence of AI in his village suggests a subtle shift. People are beginning to engage with it, even if unevenly. Khemlal’s observations of anganwadi workers using AI tools further complicate conventional assumptions about who participates in technological change. These accounts point to a gradual diffusion of AI into spaces that have historically been excluded from such developments.
Still, access alone does not resolve inequity. It often rearranges it.
Those with greater literacy, stronger command over language, and familiarity with digital systems can extract more value from AI. Others may rely on it more heavily because alternatives are limited, yet remain less equipped to assess accuracy, bias, or risk. In such circumstances, dependence can coexist with vulnerability.
Khemlal’s accounts of attempted misuse bring this tension into sharper relief. Instances where AI was reportedly used to plan an ATM robbery or think through a violent act underline a difficult truth. AI extends capacity, but it does not distinguish between intentions. Its use reflects the conditions in which it is embedded.
Concerns around safety are also deeply personal. Mehrunnisa’s unease regarding her daughter’s interaction with AI speaks to a broader anxiety about children navigating digital spaces without sufficient guidance. The possibility of exposure, misinformation, or exploitation creates a sense of watchfulness. Apurva’s emphasis on sensitisation echoes this concern, suggesting that access must be accompanied by understanding, especially for younger users.
These reflections place AI within a social and relational frame. It influences not only how people access information, but how they communicate, learn, and form judgments. Some members note that while AI simplifies interaction, it may also alter its texture. There is a faint but persistent concern that ease may come at the cost of depth.
Trust, in this landscape, remains tentative. Members use AI because it is useful, sometimes indispensable. Yet concerns about data privacy, misuse, and lack of accountability endure. There is also a recognition of limited agency. Many engage with these systems without feeling able to question them, challenge them, or seek redress when something goes wrong.
It is within this context that AI literacy takes on a different meaning.
Within ILFAT, AI literacy is not understood as technical proficiency alone. It includes the ability to frame questions with clarity, to recognise the limits of generated responses, to assess reliability, and to exercise discretion in sharing information. It involves knowing when AI can assist and when it cannot be relied upon. Without these capacities, access may offer the appearance of empowerment while quietly deepening exposure to risk.
At the same time, there is no inclination to dismiss AI. Its value is evident when it meets immediate needs. It supports learning, enables access to information, assists with drafting and communication, and helps individuals prepare for engagements with institutions that may otherwise feel distant or intimidating.
In this sense, AI is already functioning as an informal layer of support across domains such as education, language access, health information, and economic navigation. It often becomes the first place where questions are asked, especially when other avenues feel inaccessible.
Yet, ILFAT’s narratives draw a careful boundary. In areas central to its work, including mental health, justice, violence, dignity, and participation, AI cannot replace relational systems of care and accountability. It can assist in preparation, but it cannot accompany. It can inform, but it cannot take responsibility.
What then emerges is not a settled position, but an evolving inquiry.
If AI is becoming part of everyday survival and decision-making, how should it be engaged with responsibly? How can its benefits be extended without reproducing the exclusions that already exist? What forms of support are needed to ensure that individuals can use AI safely and meaningfully? And how might institutions such as ILFAT begin to reflect on their own engagement with AI, not only at the level of individual use, but within their broader work and processes?
These questions do not yet have definitive answers.
What is visible, however, is that the conversation has begun. It is no longer limited to whether AI is useful. It has moved toward how it is shaping access, how it intersects with vulnerability, and how it might be approached with care.
There is, perhaps, a certain restraint in ILFAT’s engagement with AI. It is neither uncritical enthusiasm nor resistance. It is something quieter, more deliberate. A willingness to use what is useful, to question what is uncertain, and to remain attentive to what may follow.
The story, then, is not one of conclusion.
It is one of beginning.