India is among countries at the highest risk from cyber vulnerabilities caused by a new generation of GenAI technologies.
As the world becomes more digital, cybersecurity risks are growing rapidly. Digitisation across government, business and wider society is outpacing the ability to secure against new forms of cyber threats.
Recent reports and examples indicate that cyberattacks are likely to increase in 2024 with the rise of generative artificial intelligence (GenAI), a type of artificial intelligence that can create images, videos, audio and text from a dataset of previous examples.
'Generative AI tools like ChatGPT are changing how people work and raising unforeseen business risks. Ransomware, deepfakes, and other sophisticated cybersecurity issues are emerging and evolving at full speed,' said a recent report by risk-management firm Riskonnect.
'The vast majority of companies -- 93 per cent -- anticipate significant threats associated with generative AI. But just 9 per cent of companies say they're prepared to manage generative AI risks.'
Only three per cent of organisations around the globe have a 'mature' level of readiness against modern cybersecurity risks, according to Cisco's 2024 Cybersecurity Readiness Index.
In India, only four per cent of organisations are 'mature'.
Researchers have found that large language models, which are trained on vast amounts of text to power GenAI, can be breached and undermined.
In an era when reliance on GenAI is growing, organisations will have to be far more watchful.
In a recent McKinsey Global Survey, 40 per cent of respondents said their companies planned to increase overall investment in AI.
Nevertheless, few companies seem prepared for the widespread use of GenAI, and the business risks associated with the technology.
As many as 53 per cent of organisations acknowledge that GenAI could be a cybersecurity risk, but only 38 per cent are working to mitigate it.
When McKinsey asked about the risks of adopting GenAI, few respondents said their companies were mitigating the most commonly cited risk: inaccuracy.
Business leaders and policymakers will have to prepare for a new era of cyber risks created by GenAI.
While inaccuracy, bias and flawed data are key issues in the use of GenAI, the security of its models could become a more serious risk.
Hackers can create malware using GenAI to infect systems.
'The threat from bad actors will only increase as they use generative AI to standardise and update their tactics, techniques, and procedures,' according to an assessment by Bain & Company.
'Generative AI-assisted dangers include strains of malware that self-evolve, creating variations to attack a specific target with a unique technique...undetectable by existing security measures.
'Only the most agile cybersecurity operations will stay ahead.'
India, with its rapidly growing digital population, is particularly at risk. It reports the highest incidence of AI-powered voice scams, according to a report by Asian risk consultancy MitKat Advisory.
'The interconnectedness and autonomy of AI systems make them susceptible to exploitation, raising concerns about the potential for malicious actors to compromise security, manipulate algorithms, or launch sophisticated cyberattacks,' said the report.
'Cybersecurity is disruptive and game changing. Security and privacy risks from new and emerging technologies like GenAI are rising.
'Proliferation of GenAI tools makes disinformation easy to produce and disseminate at scale. Most cyberattacks are machine-led, so defences have (got) to be,' said retired Colonel S M Kumar, co-founder of MitKat Advisory.
A concerted effort will be required to strengthen organisations' ability to counter cyber threats from new technologies.
Most organisations are struggling with existing systems and have not given adequate attention to sophisticated cyberattacks. New priorities and tools will be needed to fight new-age hacking.
Feature Presentation: Ashish Narsale/Rediff.com