This year will be the “Year of AI,” said Founder and CEO of Check Point, Gil Shwed, during a recent keynote presentation.
Conversational AI-based chatbots have been around for years, but the release of ChatGPT made every other existing chatbot look like a lumbering dinosaur. Seen as an industry disruptor, ChatGPT also kicked off an AI arms race.
A mere two months after its debut, ChatGPT had acquired more than 30 million users and was receiving roughly five million visits per day, making it one of the fastest-growing software products ever released.
OpenAI, the company behind ChatGPT, recently signed a $10 billion deal with Microsoft, which has begun to incorporate the technology into its Bing search engine (although the search engine is having an existential crisis).
While ChatGPT has plenty of room for improvement, its sudden, overwhelming popularity led Google’s management to declare a “code red.” For the past 20 years, Google’s search engine has operated as the gateway to the internet. Whether ChatGPT cuts that reign short remains to be seen.
One challenge with ChatGPT in its current state is that it synthesizes and reshapes information into new forms without regard for whether the information is accurate.
OpenAI has attempted to increase the technology’s truthfulness by applying a technique known as reinforcement learning from human feedback. In short, the system leverages user ratings to adapt its responses over time.
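OpenAI’s actual training pipeline is far more involved (it trains a separate reward model on human preference data and fine-tunes the language model against it). The toy sketch below, with entirely hypothetical names, only illustrates the core idea the article describes: user ratings act as a reward signal that steers the system toward better-rated outputs.

```python
class RatedResponder:
    """Toy feedback loop: user ratings steer which candidate response wins.

    This is NOT OpenAI's RLHF pipeline -- just a minimal illustration of
    learning from ratings, with hypothetical candidate responses.
    """

    def __init__(self, candidates):
        # One learned score and rating count per candidate response.
        self.scores = {c: 0.0 for c in candidates}
        self.counts = {c: 0 for c in candidates}

    def respond(self):
        # Greedily pick the highest-scored candidate (ties broken by order).
        return max(self.scores, key=self.scores.get)

    def rate(self, response, rating):
        # Incremental average of ratings, e.g. +1 (helpful) / -1 (harmful).
        self.counts[response] += 1
        n = self.counts[response]
        self.scores[response] += (rating - self.scores[response]) / n


bot = RatedResponder(["accurate answer", "made-up answer"])
for _ in range(5):
    bot.rate("accurate answer", +1)   # users upvote the truthful output
    bot.rate("made-up answer", -1)    # users downvote the fabricated one
print(bot.respond())  # -> "accurate answer"
```

In the real system the “candidates” are freshly generated text rather than a fixed list, and the reward signal updates model weights rather than a lookup table, but the direction of the feedback loop is the same.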
Nonetheless, OpenAI warns those using ChatGPT that the tool “may occasionally generate incorrect information” and “produce harmful instructions or biased content.”
While tech companies can selectively implement guardrails, they cannot control precisely what ChatGPT and similar technologies produce. In addition to misinformation, conversational artificial intelligence tools can also be used to propagate malware.
“At Check Point Research, we can see the Russians trying to break through the geo-regional restrictions put in place around ChatGPT,” said Pete Nicoletti, Field Chief Information Security Officer for Check Point.
Nicoletti notes that it remains unknown whether any zero-day exploits have yet appeared for ChatGPT. When an exploit does surface, the most likely vector of attack will be phishing.
Research has also shown that cyber criminals are beginning to create their own bots that tap into OpenAI’s GPT-3 API and manipulate its behavior. Once altered, these bots can generate malicious text and malicious scripts. Thus far, the bots have operated via Telegram, where they offer a restriction-free, dark version of ChatGPT.
A growing number of businesses are restricting the use of ChatGPT among employees, citing data privacy concerns.
JP Morgan executives worry that data shared by major companies via ChatGPT could be used by OpenAI developers to enhance algorithms, or in other unintended ways.
Last month, Amazon expressly told employees not to share any code or confidential information about the company with OpenAI’s chatbot.
Microsoft co-founder Bill Gates believes that ChatGPT is as significant an invention as the internet itself. In the past, artificial intelligence could read and write, but it could not understand the meaning of information. Now it can. “This will change our world,” said Gates.
If you would like more information about ChatGPT and its potential to change the world as we know it, please see CyberTalk.org’s past coverage.
Lastly, to receive more cutting-edge cyber security news, best practices and analyses, please sign up for the CyberTalk.org newsletter.
The post ChatGPT is on fire & the AI “gold rush” is here appeared first on CyberTalk.