written by
Thomas Flint

Technology in Government: AI, Convenience, and Behaviours

News 4 min read

While governments may facilitate the technological literacy and infrastructure of the populace they govern, they also use technology internally. This is separate from secret government projects, where technology is developed to gain a tactical edge over adversaries. Private companies develop all sorts of innocuous technologies that eventually make their way into government.

Governments continuously monitor these developments and adopt technologies that present a useful application. Deloitte identified nine ‘government transformation trends’ in its Government Trends 2020 report, explicitly highlighting private-sector technologies and ideas that governments aim to use in various ways. We’ll review the first four of these in this article and cover the rest in part 2.

AI-augmented government

While artificial intelligence (AI) isn’t new, modern computers are fast enough that the enormous number of calculations it requires is far less of a hurdle.

Governments worldwide have an interest in all of the applications for AI.

Currently, AI relieves the workload of government workers around the world. In Australia, the Department of Human Services uses an AI chatbot to answer queries from case-processing officers. It can answer around 85% of these questions, which significantly reduces the strain on other staff.
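
The report doesn’t describe how the department’s chatbot actually works, but a minimal sketch of the general idea, an FAQ-style matcher with made-up questions and a made-up confidence threshold, might look something like this:

```python
# Minimal sketch of an FAQ-style helper bot. The questions, answers, and
# threshold are hypothetical; the real system's design isn't described
# in the Deloitte report.
from difflib import SequenceMatcher

FAQ = {
    "how do i escalate a complex claim": "Refer the claim to a senior officer via the escalation queue.",
    "what documents are required for a rent assistance claim": "A current lease agreement and proof of rent payments.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the best-matching canned answer, or defer to a human."""
    best_question, best_score = None, 0.0
    for question in FAQ:
        score = SequenceMatcher(None, query.lower(), question).ratio()
        if score > best_score:
            best_question, best_score = question, score
    if best_score >= threshold:
        return FAQ[best_question]
    return "No confident match; route this query to a human colleague."

print(answer("What documents do I need for a rent assistance claim?"))
```

A real deployment would use a proper natural-language model rather than string similarity, but the staffing benefit comes from the same place: routine questions get answered instantly, and only the uncertain ones reach a person.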

However, AI in government may also bring negative consequences. AI systems and staff will need to interact regularly, which demands extensive planning. Over-reliance on the technology could also leave government staff less skilled and experienced. Privacy concerns include the use of AI for facial or voice recognition, which allows governments to track people more easily.

Government use could see something like this becoming commonplace, though in this example it isn't perfect.
Example of AI person detection. Image Source: Wikimedia Commons

Digital Citizens

Living as a fully digital citizen may have its perks. A fully digital government would be incredibly convenient, with all of its resources available to every user in the blink of an eye.

Modern governments are attempting to digitise most of their current systems, with many worldwide focusing on creating separate systems for their different branches. If you’re in Australia, government services are managed with a myGov account, which allows you to provide and receive information about your relationship with the government quickly and easily. Estonia’s government takes this even further.

Security seems to be a focus in this system, which is always a good thing.
Estonian Government Data Model. Image Source: 'Government as a data model': what I learned in Estonia

Estonians enter their information into a government-wide system to make their general experience more convenient. They can vote, access all of their health records, file their taxes, and do almost everything else through this online service. However, this also introduces vulnerabilities. What if someone malicious got access to this government system, from the inside or the outside?

Nudging with Behavioural Science

Controlling citizens will always be of interest to governments. While they aren’t at the level of 1984 yet, the application of behavioural science is on their radar. Deloitte kindly refers to this as “nudge thinking”, defining it as the “use of choice architecture and other techniques to try and influence the choices people make”. Of course, governments around the globe have been testing these theories on their citizens for a long time.

Technology is one of the primary channels for this kind of nudging. Both the private and public sectors have successfully used these theories to “harmlessly” manipulate people.

New Mexico’s Department of Workforce Solutions has notably used behavioural nudging to prevent claimants from wrongly collecting unemployment benefits. Simple pop-ups, tailored to the user, “influenced” mistaken claimants into entering the correct information.
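
The exact rules behind those prompts aren’t published in the report, but as a rough, hypothetical sketch, a tailored nudge could be as simple as comparing a claimant’s reported earnings with employer-reported figures and showing a gentle confirmation message when they disagree (the field names and tolerance below are made up):

```python
# Hypothetical sketch of a tailored nudge: if the claimant's reported
# earnings diverge noticeably from employer-reported figures, show a
# gentle confirmation prompt instead of silently accepting the form.
from typing import Optional

def nudge_message(reported: float, employer_reported: float,
                  tolerance: float = 0.10) -> Optional[str]:
    """Return a tailored prompt if the figures disagree, else None."""
    if employer_reported == 0:
        return None
    divergence = abs(reported - employer_reported) / employer_reported
    if divergence > tolerance:
        return (
            f"You entered ${reported:,.2f}, but your employer reported "
            f"${employer_reported:,.2f} for this period. Most people in your "
            "situation double-check their payslip before submitting."
        )
    return None

print(nudge_message(reported=150.00, employer_reported=420.00))
```

Notice that nothing is blocked: the claimant can still submit whatever they like. The “choice architecture” simply makes the accurate option the easiest one to pick, which is exactly what makes this approach both effective and worth scrutinising.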

However, anyone with the ability to influence the actions of large numbers of people with minimal effort needs to be careful. Without proper nuance, discretion, and forethought, this kind of manipulation could easily do real harm.

Ethics in AI and Big Data

Governments are always the centre of attention when ethics is involved. Who is responsible for the massive amounts of data they collect? This leads to the ethics of AI and its governance: who is responsible for these systems?

Bias is also an essential factor to consider when using big data. The prejudices of people in the real world can leak into the algorithms they design. Deloitte provides the example of court systems that use algorithms to appraise defendants. How does anyone know if any given system has a bias?
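
One common starting point (far from sufficient on its own, and not something the report prescribes) is to compare outcome rates across groups. The sketch below uses entirely made-up data to check whether a system flags members of different groups at noticeably different rates, a rough demographic-parity test:

```python
# Rough bias check on hypothetical data: compare how often a system flags
# members of each group. Large gaps warrant deeper investigation; roughly
# equal rates alone do not prove the system is fair.
from collections import defaultdict

records = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "A", "flagged": True},
    {"group": "B", "flagged": False},
    {"group": "B", "flagged": False},
    {"group": "B", "flagged": True},
]

totals, flagged = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    flagged[record["group"]] += record["flagged"]

rates = {group: flagged[group] / totals[group] for group in totals}
print(rates)                                       # flag rate per group
print(max(rates.values()) - min(rates.values()))   # gap between groups
```

Audits like this only surface a symptom; explaining the gap still requires looking at the training data and the decisions the system feeds into.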

Closing Thoughts: Technology in Government

All of these topics link technologies currently in development to real-world effects and try to guess at the motives world governments could have. If there’s something about these topics that hasn’t been mentioned, feel free to respond with additional input.

The next article will cover the remaining topics: Predictive Analytics, Cloud, Safe Experimentation, Smart Governments, and Citizens as Customers. The focus will move away from government motives and towards how citizens will be affected.

The Deloitte report on these topics is comprehensive and well-researched. Summarising it has not been easy, so be sure to click here for the full report.
