Key Examples of AI in the Public Sector

A roundup of some of the biggest AI programs coming from governments around the world today.


The use of Artificial Intelligence by government has had a bad rap over the years. From badly designed algorithms denying citizens the help they need to biased datasets producing unfair outcomes, the sheer power of AI is never clearer than when it is found to be derailing lives.

However, governmental systems going wrong is nothing new, and certainly not a problem found only in AI. The reality is that governments are increasingly looking to get their AI systems right, particularly after such haphazard attempts in the past.

Here is a roundup of some of the more positive examples of Public Sector AI.

Belgium CitizenLab

Governments are increasingly working to develop citizen-driven policies and services. To do this, they are looking for ways to engage with their citizens in order to understand their perspectives, opinions and needs.

Digital participation platforms are important tools for achieving this and improving government responsiveness. However, analyzing the high volumes of citizen input collected on these platforms is extremely time-consuming and daunting for government officials, and hinders them from uncovering valuable inputs. Setting up a digital participation platform, therefore, is not enough: the process of data analysis must be more accessible in order to enable civil servants to tap into collective intelligence and make better-informed decisions.

Belgium’s CitizenLab is a civic technology company that aims to empower civil servants to do just that, using machine-learning-augmented processes to help them analyze citizen input, make better decisions and collaborate more efficiently internally.

CitizenLab has developed a public participation platform that uses Natural Language Processing (NLP) and Machine Learning techniques to help civil servants easily process thousands of citizen contributions and use these insights efficiently in decision-making. The dashboards on the platform can classify ideas, highlight emerging topics, summarize trends, and cluster similar contributions by theme, demographic trait or location.
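The clustering step in particular follows a fairly standard text-mining pattern. Below is a minimal sketch of how contributions might be grouped by theme using TF-IDF features and k-means; CitizenLab's actual models and multilingual pipeline are not public, so the sample inputs and cluster count here are purely illustrative.

```python
# Illustrative sketch only: groups short citizen contributions into themes
# using TF-IDF features and k-means. CitizenLab's production pipeline is
# not public; the sample inputs and cluster count are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

contributions = [
    "More bike lanes on the main avenue, please",
    "The park near the station needs better lighting at night",
    "Could we get protected cycling paths to the school?",
    "Street lights are broken on Elm Street, it feels unsafe",
]

# Turn free text into numeric vectors, then cluster similar ideas together.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(contributions)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for text, label in zip(contributions, kmeans.labels_):
    print(label, "-", text)
```

In practice a platform would also need language detection, multilingual embeddings and human review of the clusters, but the core idea of turning thousands of free-text contributions into a handful of themes is the same.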

Transport Canada

Transport Canada is the department responsible for the Government of Canada’s transportation policies and programs. Each year, Transport Canada’s Pre-load Air Cargo Targeting (PACT) team receives nearly one million pre-load air cargo records, containing information such as shipper name and address, consignee name and address, weight and piece count. Each record may include anywhere from 10 to 100 fields, depending on the air carrier and business model of the shipper. One employee, working at an unrealistic rate of one record per minute, would not even have enough time to review 10 percent of the records.
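The back-of-the-envelope arithmetic behind that claim looks roughly like this, under illustrative assumptions about working time (the figures below are ours, not Transport Canada's):

```python
# Rough illustration of the review backlog. The working-time figures are
# illustrative assumptions, not Transport Canada's own numbers.
records_per_year = 1_000_000
records_per_minute = 1                      # already an unrealistic pace
working_days = 220                          # assumed working days per year
review_minutes_per_day = 7.5 * 60           # assumed focused review time per day

reviewable = records_per_minute * review_minutes_per_day * working_days
print(f"Records reviewable per year: {reviewable:,.0f}")
print(f"Share of annual volume: {reviewable / records_per_year:.1%}")
# ~99,000 records, i.e. under 10% of the annual volume
```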

To date, very few governments have the dedicated resources to scan air cargo records for risk before loading, and of those that do, none use AI. Transport Canada decided to improve on this situation and enhance the safety and security of air cargo transportation.

To do this, Transport Canada is adopting AI to enhance its processes and procedures and free up employees to work on higher-value tasks. The department started by exploring the use of AI for risk-based reviews of air cargo records, which could be scaled to other areas if successful. To achieve this, the department assembled a multi-disciplinary team consisting of members of PACT and the Digital Services and Transformation divisions of the department, one of Canada’s Free Agents and partners from an external IT firm with expertise in AI.

For the pilot, Transport Canada set out to answer two questions related to its performance: “Can AI improve our ability to conduct risk-based oversight?” and “How can we improve effectiveness and efficiency when assessing risk in air cargo shipments?”

To answer these questions, the innovation team developed and implemented a two-step approach in 2018. As a first step, they used data from previous air cargo records and manual risk assessments to explore supervised and unsupervised approaches. With the supervised approach, the team tried to learn the relationship between the inputs (cargo records) and the outcome (i.e. did this cargo record indicate a greater level of risk, based on previous manual risk assessments?). With unsupervised learning, the team sought to understand the relationships between all cargo inputs in order to identify rare or unusual shipments, which could be indicative of risk.
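As a sketch of what those two approaches look like in practice, the snippet below pairs a supervised classifier trained on past risk flags with an unsupervised anomaly detector. The features, models and data are invented for illustration; the department's actual models are not public.

```python
# Illustrative sketch of the two approaches on made-up cargo features.
# Feature names, models and thresholds are assumptions, not Transport
# Canada's actual implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)

# Toy numeric features per cargo record, e.g. weight, piece count, declared value.
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)      # 1 = flagged by past manual review

# Supervised: learn the relationship between record features and past risk flags.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Unsupervised: flag rare or unusual shipments without using the labels at all.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X_train)

X_new = rng.normal(size=(5, 3))
print("Predicted risk flags:   ", clf.predict(X_new))
print("Anomaly flags (-1=rare):", detector.predict(X_new))
```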

Second, the team developed a proof of concept to test natural language processing (NLP) on a different subset of data. The goal was to process air cargo records and automatically tag each record with a risk indicator based on the contents of its “free text” fields and other structured fields. This was completed in the first quarter of 2018 and showed that NLP could successfully sort cargo data into meaningful categories in real time.
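A minimal sketch of that kind of free-text risk tagging is shown below, using a TF-IDF representation and a linear classifier. The example records, labels and model choice are assumptions made for illustration, not the department's proof of concept.

```python
# Illustrative free-text risk tagging. Example records, labels and the
# model choice are assumptions, not Transport Canada's proof of concept.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

free_text = [
    "consolidated garments, known shipper, recurring weekly route",
    "unspecified machine parts, first-time shipper, cash payment",
    "fresh flowers, established consignee, temperature controlled",
    "electronics samples, incomplete consignee address",
]
risk_label = [0, 1, 0, 1]   # 1 = flag for manual review (toy labels)

# Vectorize the free-text fields and learn which phrasing patterns
# correlate with records that were flagged in the past.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(free_text, risk_label)

print(model.predict(["machine parts, address missing, first-time shipper"]))
```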

United States Federal Data Strategy and Roadmap

It isn’t only what is already being deployed in public sector AI that matters, however. Many governments around the world are still laying the groundwork to ensure that whatever systems are implemented going forward actually work.

The United States Government is one of the largest entities in the world, which can make it challenging to manage data as an asset in a consistent manner at an enterprise level. Several new laws have been passed recently that seek to address this situation, which could be complemented by uniform policy guidance in the executive branch to help ensure consistent implementation. In addition to new laws, the US President has identified “Leveraging Data as a Strategic Asset” as a presidential priority area and a Cross-Agency Priority Goal (CAP Goal), which necessitates a systems approach to data in government.

On 4th June 2019, the White House Office of Management and Budget (OMB) launched the Federal Data Strategy (Strategy) as a government-wide framework to help promote consistency and quality in data infrastructure, governance, actions, protection, and security. The Strategy was created by a cross-government team and represents a ten-year vision for how the government will “accelerate the use of data to support the foundations of democracy, deliver on mission, serve the public, and steward resources while protecting security, privacy and confidentiality”. The Strategy consists of 10 principles organized around three categories that serve as motivational guidelines for government agencies.

United States Sensor Technology

A number of U.S. cities and towns have been harnessing data collected via the Internet of Things (IoT) to enhance public sector services.

In Clifton Park, N.Y., the town is installing long-lasting LED lights and smart city technologies in the hope of saving $4.5 million over 20 years. The updated street lights will also be equipped with GIS mapping, which is expected to allow for more efficient maintenance of the lights and the ability to dim or brighten specific lights. The lights will also include smart city controls that will, in the future, help the town to monitor air quality, traffic and noise, as well as enhance safety for pedestrians and cyclists.

In Chicago, the city plans to deploy hundreds of sensors on streetlight poles across the downtown area and in residential neighborhoods to monitor temperature, humidity, wind, noise, air quality and traffic from cars, pedestrians and bicycles.

While in San Mateo, California, sensors are being used to track parking occupancy and duration in real time, and report data through the cloud. This helps San Mateo make better decisions about parking policies, rates, and time limits, making more efficient use of the downtown parking system. The city also uses connected sensors and house meters to help structural engineering teams test the integrity of bridges and tunnels, and better prioritize repair and rebuilding.
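The common pattern behind these deployments is a sensor periodically reporting a small reading to a cloud service, where the readings are aggregated into dashboards and used to tune policy. A minimal sketch of that reporting step is below; the endpoint URL, sensor identifier and payload fields are hypothetical, not the systems any of these cities actually run.

```python
# Illustrative sketch of a parking sensor reporting a reading to a cloud API.
# The endpoint URL, sensor id and payload fields are hypothetical.
import time
import requests

reading = {
    "sensor_id": "lot-12-bay-07",            # hypothetical identifier
    "occupied": True,
    "duration_minutes": 42,
    "timestamp": int(time.time()),
}

# A cloud service behind this (hypothetical) endpoint would aggregate the
# readings for occupancy dashboards and parking-policy analysis.
response = requests.post(
    "https://sensors.example.org/v1/parking", json=reading, timeout=5
)
response.raise_for_status()
```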

The European Union AI Act

In April 2019, the European Commission (EC) published its Ethics Guidelines for Trustworthy AI to provide guidance on how to design and implement AI systems in an ethical and trustworthy way. The Guidelines were created by the EC’s High-Level Expert Group on Artificial Intelligence (AI HLEG), which consists of 52 AI experts from academia, civil society and industry. A core task of the AI HLEG has been to propose AI ethics guidelines that consider issues such as fairness, safety, transparency, the future of work, democracy, privacy and personal data protection, dignity and non-discrimination, among others.

The Guidelines maintain that trustworthy AI has three components that work in harmony:

  • Lawful. The AI should comply with all applicable laws and regulations.
  • Ethical. The AI should adhere to ethical principles and values.
  • Robust. The AI should avoid unintentional harm from both a technical and social perspective.

Read our in-depth article about what is in the EU’s AI Act here.

