
AI: Will Nigeria be there when the world meets to standardise, discuss safety?


Standardising regulations, safety of Artificial Intelligence
   
By Sonny Aragba-Akpore

When the International Telecommunication Union (ITU) held its Artificial Intelligence (AI) for Good Global Summit in Geneva, Switzerland, on July 6-7, 2023, it was specifically to drum up standardisation, safety and regulatory processes for AI.
  
Follow-up summits in other parts of the globe, including one held in Dubai, United Arab Emirates (UAE), have since taken place, together pointing to what is likely to be the direction for the standardisation of AI.

By May 29 this year, when Nigeria marks the first anniversary of a new administration and speeches are being made at Eagle Square or somewhere else in the country, global technology leaders will converge in Geneva. But Nigeria is not likely to be on their minds, as discussions will focus on AI governance, exploring the surge in global efforts to craft AI policy, regulation, and governance frameworks.

“The AI Governance Day will bring together representatives of governments, companies, academia, civil society, and UN agencies, and aims to forge pathways to transform dialogue around AI governance into impactful action,” according to ITU documents.

On October 30, 2023, United States President Joe Biden signed an Executive Order (EO) requiring that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  
In accordance with the Defense Production Act, the Order requires that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety notify the federal government when training the model, and share the results of all red-team safety tests. These measures are meant to ensure AI systems are safe, secure, and trustworthy before companies make them public. Among the Order’s other directives:

“Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.”

“Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.”

“Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

“Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.”

“Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.”

And in the United Kingdom (UK), an Office for Artificial Intelligence has been established and is now part of the AI Policy Directorate in the Department for Science, Innovation and Technology (DSIT).

On November 15, 2023, the UK government released £17 million in funding for scholarships on AI and data science conversion courses to help underrepresented groups.
Companies are encouraged to contribute funding to boost the skills pipeline for the future workforce.
The £17 million in government funding will create more scholarships for AI and data science conversion courses, helping young people, including women, Black people, people with disabilities, and people from disadvantaged socioeconomic backgrounds, join the UK’s world-leading AI industry.

Together, government and industry funding will create two thousand scholarships for master’s-level AI and data science conversion courses, each worth £10,000. The programme enables graduates to pursue further study in the field even if their undergraduate course is not directly related, creating a new generation of experts in data science and AI.

Meanwhile, from May 30 to 31, global leaders and innovators in artificial intelligence will join the humanitarian community at the AI for Good Global Summit 2024 in Geneva, Switzerland, to explore how new technology can drive sustainable development.

This year’s edition of the summit will showcase innovations in generative AI, robotics, and brain-machine interfaces that can accelerate progress in areas such as climate action, accessibility, health, and disaster response.

Summit speakers, including some of the world’s foremost AI luminaries, will explore the latest breakthroughs in AI and examine actions to ensure that AI works to humanity’s benefit.

The summit connects AI innovators with public and private-sector decision-makers to help scale up AI solutions globally.

ITU, the UN specialized agency for information and communication technologies, organises the yearly AI for Good Global Summit together with 40 partner UN agencies. The event is co-convened by the Government of Switzerland. 

In addition to talks by AI thought leaders, this year’s summit will host machine learning masterclasses, curated by experts for experts, covering topics from deepfakes and climate change to brain-machine interfaces, AI for public services, explainable AI, and machine learning in communication networks.

Start-ups, young people and creatives will demonstrate their ideas at the AI for Good Innovation Factory Grand Finale, Robotics for Good Youth Challenge, and Canvas of the Future art contest.

The summit’s exhibition space will feature an array of cutting-edge demos, including AI for accessibility, collective drone swarms, bio-inspired rescue robots, a RoboCup robot football tournament, performance-boosting exoskeletons, and AI-inspired art.

Exhibition highlights will include demos of brain-machine interfaces – an AI advancement that promises to open new frontiers for neurotechnology. A press conference on brain-machine interfaces will highlight new technologies enabling mind-controlled movement and communication for persons with disabilities, offering insights on how progress in the field could impact the future of human performance, mental health, and wellbeing.

● Aragba-Akpore, an analyst on tech trends, lives in Abuja, and sent this via WhatsApp.
