This is the second in a series of articles on the Open Government Partnership Summit 2025, in Vitoria-Gasteiz, Spain, in October. Technical Writer/Analyst Andrew Nette was one of the Link Digital staff present at the event.

The topic that most dominated discussions at the Open Government Partnership (OGP) Summit 2025 was AI.

Not surprisingly, given the diversity of those present – everyone from government officials of major developed countries to civil society representatives from the Global South – the conversation spanned a range of viewpoints, from AI as a major threat to democracy and open government, to AI as a possible way for governments to work better and provide improved services for citizens.

It’s a topic of core interest to Link Digital, given the increasing implications of AI for our work globally in open data and digital public infrastructure. But it also matters far more broadly, in terms of what AI means for a sustainable system of social cohesion in democracies going forward.

The following article presents a snapshot of the main discussions at the OGP Summit around AI use in government services.

What exactly are we debating?

AI is a set of fast-evolving technologies, comprising everything from very narrow, task-specific forms to deep learning models that allow AI applications to learn to perform tasks requiring human-level intelligence and decision-making capabilities. But mirroring debate around the tech more generally, most discussion at the OGP tended to obscure the differences between these technologies.

As Adrià Mercader, Link Digital’s Senior Solutions Architect, who was also at the OGP Summit, observed:

There is constant bombardment of AI discourse around potential uses and major disruptions AI will cause, and there are massive economic interests behind this hype. This makes it very difficult to assess the technology purely on its merits, first because it is a fairly technical field that is evolving very fast, but also because it can mean very different things. People tend to associate AI with large language models like ChatGPT. But AI also encompasses many other technologies like image recognition, semantic search, natural language processing, etc. In this context it is really easy to overestimate the capabilities or accuracy of current AI technologies while at the same time minimising the risks or biases involved. There is a massive need to increase literacy in this field, alongside data literacy and wider technological literacy.

A related point was made during the Summit by Patricio Del Boca, Tech Lead at the Open Knowledge Foundation. He noted there have been successive waves of AI, starting in earnest around 2013 with the advent of basic machine learning models, followed by a wave of predictive AI and now generative AI.

AI is just a tool, and how we have used it has shifted over time and will continue to do so.

The OGP, however, is not a forum for technological discussion but one that focuses on the political and social uses and impacts of digital technologies like AI. As part of this, participants tried to tackle the contrast between AI, which is fast moving and fluid, and government structures that can be rigid and slow to respond to change.

Governments adopting AI at a rapid rate

What was clear from the Summit is that governments in both developed countries and the Global South are already increasingly utilising AI in its many forms. This is happening on two levels: civil servants using the technology in their individual work; and governments adopting AI to automate or semi-automate government processes and public-facing services.

Ecuador is one of many Global South countries racing to adopt AI to close the gap between citizens and government services. Juan Francois Román, Ecuador’s Under-Secretary of Government Management, made an explicit link between democratic crisis and a lack of responsive services in the decision to incorporate an AI-powered chatbot into the national government’s open data portal. He argued this would enable citizens to ask a question and get a faster, more reliable response. He also hoped it would help open the data hosted on the portal to a wider section of the public beyond data experts.

Government is slow, and citizens want quick, almost real-time answers to their questions, and will shop around for the data they need to meet needs and exercise rights. Creating a digital assistant could be a good way to help citizens and have a real-time conversation with them.

Professor Beth Noveck of Northeastern University noted the extensive use of AI by state governments in the US: everything from helping to simplify law drafting and monitoring state government procurement, to being more responsive to questions from citizens. Like Ecuador’s Román, she also drew a link between AI adoption and strengthening democracy.

It [AI] is giving us the tools to accelerate what we have always wanted to do. The key question is who do we govern with AI? It can make government work better for people and is a way of pushing back against anti-democratic views.

More pressing for Noveck is how to train civil servants to use AI more efficiently and responsively. This was part of a much larger question debated at the Summit: what are the guardrails for AI’s use, and how do governments monitor what they do with the technology to make sure its use is transparent and citizen-centric?

Estonia is taking a multi-stakeholder approach, with the central government working with the education department and private enterprise to roll out an AI teaching aid that will be implemented across the school system.

AI is here in our everyday lives and governments must decide how to make the best use of it,

said Estonian Secretary of State Keit Kasemets.

The reality is that we need to speed up and make decisions more quickly and AI can help us do this.

The need for algorithmic transparency

One voice at the Summit that was highly critical of the way AI is being adopted by governments was María Paz Hermosilla, Director of GobLab at Chile’s Universidad Adolfo Ibáñez. The use of AI by most governments, she stressed, is opaque, lacking any kind of algorithmic transparency or accountability. Most systems use personal data, but the public is given few details.

In low-risk settings I welcome AI intervention. In high-risk areas there is a need for more testing. Governments should go for low-risk fruit, and take more time in higher-risk areas like health. We need much more testing before these things become law.

GobLab has mapped approximately 700 instances in which AI has been used in government service provision in Latin America alone, most with virtually no public data about them or standards of algorithmic transparency. Hermosilla argued there is a pressing need for algorithmic registers that go beyond the description and code of the algorithm used in a particular government service to answer questions such as:

  • Why the algorithm in question was implemented in the first place
  • Whether it uses personal data and, if so, what kind
  • How the system is audited
  • Which agency is implementing it, and who is the human point of contact to learn more
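The register fields Hermosilla describes map naturally onto a simple machine-readable schema. As a rough sketch only – the field names and the example entry below are my own invention, not drawn from any official register standard – a single register entry might look like this:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AlgorithmRegisterEntry:
    """One entry in a public algorithmic register (illustrative schema only)."""
    system_name: str
    implementing_agency: str            # which agency runs the system
    purpose: str                        # why the algorithm was deployed
    uses_personal_data: bool
    personal_data_categories: list = field(default_factory=list)
    audit_process: str = "not documented"
    human_contact: str = "not documented"  # a real person to ask for more detail

# Hypothetical example entry (all values invented for illustration)
entry = AlgorithmRegisterEntry(
    system_name="Benefits eligibility triage",
    implementing_agency="Example Ministry of Social Services",
    purpose="Prioritise manual review of benefit applications",
    uses_personal_data=True,
    personal_data_categories=["income", "household size"],
    audit_process="Annual third-party audit",
    human_contact="transparency@example.gov",
)
print(asdict(entry)["uses_personal_data"])  # → True
```

Publishing entries like this as structured open data, rather than prose, is what would make such registers easy to audit, compare across agencies, and, as Hermosilla argues, legally enforce.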

Hermosilla argued these registers need to be publicly available and legally enforceable.

The use of AI by governments needs to be routinely monitored just like other government services, particularly in relation to important services.

While several countries present at the Summit had undertaken such initiatives, the gold standard is Canada. According to Dominic Rochon, Chief Information Officer for the Treasury Board of Canada Secretariat, any Canadian agency that wants to adopt an automated decision system using AI must complete an Algorithmic Impact Assessment, not only during the design phase but also during implementation and once deployed. Introduced in 2019, this assessment tool has already been reviewed four times.

The challenge is how to achieve the right balance between adoption and responsible use. Every agency must have an AI team, and all assessments are publicly available on an open data portal. Rochon claimed that while the process comes with challenges, including the complexity of the assessments and the burden they place on teams, it has brought an increase in accountability and collaboration around AI use.

But, of course, not every government has Canada’s resources. Smaller and less developed countries especially face constraints in monitoring AI use, including the lack of a specific administrative function within government to regulate it.

Language is another consideration. Government representatives from Estonia and Armenia reported making a major effort to create local language AI models. Leonida Mutuka, AI Research Lead for the Kenyan-based Local Development Research Institute, stressed that Africa needs local talent trained in how to use and work with AI, including how to deal effectively with issues like bias and ensure AI effectively reflects African contexts.

Conclusion

While the OGP Summit 2025 provided no neat conclusion to the issues around governments’ increasing use of AI, there were some takeaways from the meeting that struck me as important.

First, AI must be used to make governments better, not just busier and more efficient. Several speakers stressed there is no point using AI simply to digitise services that do not work. The point is to use AI to build improved services that the public wants. While AI is good at doing certain things at scale and might help with repetitive tasks, it cannot address underlying issues in government services such as underfunding and lack of transparency. Citizens may be understandably wary of AI technology applied to public services, so again, full transparency is critical here.

Second, while there was a lot of discussion about the ongoing importance of open data in developing transparent and citizen-centric uses of AI in government, it felt like only a few voices stressed the OGP’s continuing role in supporting governments to invest in building their own tools to open up more data and make it more interoperable. As Ricardo Miron Torres, Chief Technical Officer at the Digital Public Goods Alliance, put it:

OGP has made considerable progress on open data for the last 10-15 years. We should not throw out these principles in the rush to adopt AI.

The discussions around open data at the OGP will be the subject of my third and final report from the 2025 Summit.

Also read: A look at some of the key debates around Digital Public Infrastructure at the Open Government Partnership Summit 2025