
Imagine your child asking for money. Except they aren't – it's an AI scam | James Wise

This year, I was sent a link to a video of myself enthusiastically explaining why I had invested in a new technology company. In the video, I spoke of my great faith in the company's leadership and encouraged others to try its service. The problem is that I have never met the company or used the product.

It looked and sounded like me, down to my fading Mancunian accent. But it wasn't me. It was an AI-generated fake, used to pitch a business and lure people into investing in the company. Far from being impressed, I was concerned about the myriad potential malicious uses of these new tools.

From data breaches to phishing attacks in which scammers trick people into sharing their passwords or transferring money to unknown accounts, cybercrime is already one of the most commonly experienced forms of crime in the UK. In 2022, the UK had the highest number of victims of cybercrime per million internet users in the world. In part, we are victims of our own digital success. Britons are early adopters of new technologies such as online shopping and mobile banking, and cybercriminals seek to exploit them. As AI becomes more sophisticated, these criminals are offered even more ways to trick us into believing they are someone else.

Many of the most impressive advances in human imitation are being developed close to home. A company called Eleven Labs has built and released a tool that can reproduce almost any accent in any language near-perfectly. Visit its website and you can have the pre-trained model read out statements in the voice of the fast-talking New Yorker "Sam" or the softer, Midwestern-sounding "Bella".

Synthesia, a London-based company, goes further. Its technology allows customers to create entirely new sales representatives: photorealistic videos of synthetically generated people speaking any language, pitching products or providing customer support. The videos are incredibly lifelike, but the people in them do not exist.

Eleven Labs has very clear rules on the use and abuse of its technology, stating that audio "cannot be cloned for abusive purposes such as fraud, discrimination, hate speech, or any form of online abuse". But less ethical companies are launching similar products at a rapid pace.

It is somewhat ironic that one of AI's first major uses is to imitate humans, for better or worse. Alan Turing, the godfather of modern computing, originally called his famous Turing test the "imitation game", assessing an AI's ability to trick humans into believing it was real. Passing this test soon became a benchmark of success for AI developers. Now that anyone can create a synthetic person at the click of a button, we need an anti-Turing test: a way to prove who is real and what was generated.

When a video call comes in from a teenager asking for emergency funds for their gap year, how do you know it is really from them? And if a request to transfer money appears to come from your boss, what should you do? These questions are no longer hypothetical.

Fortunately, some services that tackle this challenge already exist. As soon as ChatGPT was released, and clever students adopted it to complete their homework, AI-detection tools such as Originality.ai emerged to tell teachers the likelihood that an essay had actually been written by AI. Similar solutions are under development to assess whether a video is authentic, relying on the pixel-level inaccuracies that still affect even the most sophisticated AI tools.

And new initiatives are starting, such as the Content Authenticity Initiative, which Synthesia supports. Launched in 2019, it aims to give users more insight into where the content they receive comes from and how it was created. More controversially, but perhaps inevitably, some national form of digital ID, a way of verifying whether the person you are talking to is a real human or a bot, will almost certainly become necessary if you want to distinguish a spouse from a fake.

In the meantime, more effort is needed to raise public awareness of the growing sophistication of cybercrime and of what is now possible. While we wait for government action and regulation to take shape, there is a more immediate risk: a thousand AI tricksters exacerbating the UK's existing cyber-fraud problem.

Source: https://www.theguardian.com/commentisfree/2023/jun/30/money-ai-scam-fraud-fraudsters-trick
