Sora Has Launched: How to Counter Fraud in the Era of Large Models?

At the beginning of the Year of the Dragon, Sora came out of nowhere.

 

On February 15 (U.S. local time), the artificial intelligence research company OpenAI officially released its text-to-video AI model, Sora, to the public. The impact of ChatGPT, the revolutionary product the company released more than a year ago, is still being felt, and the moment Sora was released it stirred up the global AI field and once again became the focus of public attention.

 

According to OpenAI's technical report, Sora is a video generation model intended as a simulator of the world: it can understand and simulate foundational aspects of the real world, grasp how objects exist in physical space, and create videos of up to 60 seconds from a user's text prompt, with content that closely mimics the real physical world and includes complex scenes with multiple characters and specific motions.

 

Large text-to-video models are not unique to OpenAI; Runway, Pika, Stable Video Diffusion, and a large number of AI startups are already on this track. Sora's release caused such a sensation and buzz because its breakthroughs match the vision of a new generation of productivity tools for the AI field: in the length of the generated video, in shot switching and picture smoothness, and in the underlying model architecture, it goes far beyond existing products of its kind.

 

The OpenAI website has been updated with dozens of video demos. In these examples, Sora not only presents details accurately, but also appears to understand how objects exist in the physical world and can generate characters with rich emotions.

 

It is foreseeable that Sora will have a huge impact on film and television, games, advertising, and other content industries, and will also drive the development of upstream industries such as AI servers, AI chips, and optical communications. But while we enjoy these positive effects, there are hidden dangers we also need to guard against.

 

Even before Sora was introduced, there were many cases of fraud using AI-faked videos, which shows the risk that AI-generated video can be abused for illegal purposes. The future popularization of Sora will further lower the threshold for forging videos, and fraud carried out through deepfakes may become even more widespread.

 

AI technology upgrades again, and anti-fraud faces a triple hurdle: identification, detection, and traceability

The emergence of Sora means that the threshold for committing fraud with upgraded AI technology will keep falling, and technology companies providing anti-fraud products and services will face more serious challenges from more powerful attacks. The biggest change can be expected to come from deepfakes. The danger of deepfakes lies not only in the generation of fake videos and images, but in the entire fraud ecosystem they enable: an intricate web of bots, fake accounts, and anonymizing services, all designed to produce, amplify, and distribute fabricated information and content that is difficult to identify, detect, and trace back.

 

In addition to "deep fakes," AI-generated videos that contain illegal or ethically risky content, synthesized identities, dissemination of false information, and copyright and privacy infringement are all problems that Sora may pose in the future. To prevent and combat fraud caused by these issues, on the one hand, it is necessary to identify and detect fake videos, images, and information (effectively screening out fake and forged content); on the other hand, it is necessary to implement channels and platform networks for identifying and detecting fraudulent behaviors such as "deep forgery" (improving the security of digital accounts in various aspects), It is also necessary to prevent the misuse of fake certificates triggered by similar AI-generating tools.

 

In response to public anxiety about counterfeiting following the launch of Sora, OpenAI said that when it does launch the product to the public, it will ensure that generated videos contain source metadata and will introduce tools to detect video authenticity. It also promised to implement security measures before using Sora in its products, including adversarial testing of the model by "experts in the fields of misinformation, hateful content, and bias" to assess harm and risk, and checking and rejecting text prompts that contain extreme violence, sexual content, hateful imagery, or other people's intellectual property, among other things. Even so, OpenAI admits that despite extensive research and testing, "we still can't predict all the beneficial ways people will use our technology and all the ways they will misuse it."

 

With the improvement of technology, the anti-fraud auditing of AI-generated videos faces new challenges. As a leading global artificial intelligence technology company, ADVANCE.AI is committed to providing neutral, reliable, and compliant digital identity verification and risk management solutions for financial institutions and Internet enterprises worldwide, and has been paying close attention to technological progress in deepfake and synthetic-identity fraud while continuously strengthening its products.

ADVANCE.AI has launched the "Star Verification" digital identity verification and risk management solution, which includes a newly upgraded 3D liveness detection function. By guiding the user to move toward and away from the camera, it performs liveness detection and captures both close-up and distant images of the user, ensuring that user images are captured in real time, improving the accuracy of detecting replay and deepfake fraud, and efficiently blocking AI face-swapping, face synthesis, and other deepfake identity attacks. The companion "Fake ID Detection" product uses fully automated workflows and machine learning models to accurately and quickly identify the six major types of document forgery, ensuring the authenticity of user-provided identity documents, preventing fraud, and protecting enterprises and organizations from the impact of forged identity documents.
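To make the near/far mechanism concrete, below is a minimal sketch of the idea, assuming face boxes and identity embeddings are produced upstream by a face SDK; the `FaceCapture` structure and the thresholds are illustrative placeholders, not ADVANCE.AI's actual implementation.

```python
# A minimal sketch (not the actual product logic) of a near/far liveness check:
# the user is guided toward and away from the camera, and the two captures are
# compared. Face detection and embedding are assumed to happen upstream.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceCapture:
    box_area: float        # area of the detected face bounding box, in pixels
    embedding: np.ndarray  # identity embedding from a face-recognition model
    timestamp: float       # capture time within the session, in seconds

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def near_far_liveness_check(near: FaceCapture, far: FaceCapture,
                            min_area_ratio: float = 2.0,
                            min_identity_sim: float = 0.8,
                            max_gap_seconds: float = 10.0) -> bool:
    """True only if the two captures are consistent with the same person
    completing the near/far challenge within one short, real-time session."""
    # 1. The face must grow substantially in the near capture,
    #    i.e. the requested motion actually happened.
    if near.box_area / max(far.box_area, 1e-6) < min_area_ratio:
        return False
    # 2. The identity must stay consistent across the two captures.
    if cosine_similarity(near.embedding, far.embedding) < min_identity_sim:
        return False
    # 3. Both frames must come from the same short session.
    if abs(near.timestamp - far.timestamp) > max_gap_seconds:
        return False
    return True

# Toy usage with placeholder values:
emb = np.random.rand(128)
near = FaceCapture(box_area=90_000, embedding=emb, timestamp=3.2)
far = FaceCapture(box_area=30_000, embedding=emb + 0.01, timestamp=1.0)
print(near_far_liveness_check(near, far))  # True for this toy data
```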

 

Since the release of Sora, ADVANCE.AI has been paying close attention to the escalation of fraud it may bring, and the corresponding anti-fraud products have been added to an upgrade plan; the company has the ability and the confidence to cope with the ever-changing fraud landscape and to keep protecting enterprises that are going global. On the occasion of Sora's release, we interviewed ADVANCE.AI technology leader Fan Huaiyu and ADVANCE.AI artificial intelligence product leader Yu Jiangbo, who shared their judgments on Sora's impact in the field of AI-generated fraud and their views on prevention and response.

 

Against the "deep forgery" dark upgrade, from the algorithm and data training to improve anti-fraud technology

 

"In the matter of 'AI video generation', the release of Sora is a bit epoch-making, and years later we can even use 'pre-Sora era' and 'post-Sora era' to describe it. Era' to describe, such as Sora officially released for the public and a large area to spread the use of film and television, games, advertising and other industries will certainly have no small change, and in the field of AI-generated video fraud will also have an impact on the cost of fraudsters and the threshold of the crime is further reduced." ADVANCE.AI technology leader Fan Huaiyu said, "and the technology of the 'depth of forgery' of this fraud will be the most obvious, in the past we deal with the "depth of forgery" to monitor the identification of common methods There is the use of image features, looking for evidence of specific tampering [image feature detection], as well as [image composition authenticity comparison] method, the use of artificial intelligence technology depth of forgery of images or videos, in the image details, there will still be some with the physical object based on the normal shooting of the image of the obvious deviation between the difference between the comparison of which the anti-fraud response."

 

Fan Huaiyu also said that "in the depth of dynamic video forgery monitoring, in the past, we will detect and observe the abnormal changes in the image characteristics or composition of dynamic portraits in some specified angle twisting, we can find out whether there is forgery. With Sora's support, this kind of forged video details on the real degree will be greatly improved, vulnerabilities and cracks may become difficult to detect, which requires us to improve ADVANCE.AI's existing 'vivo detection function' product fraud identification capabilities, from the algorithm and data training efforts. "
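To show the kind of signal such a head-turn check can look at, here is a simplified sketch, assuming yaw angles are estimated upstream (for example, from facial landmarks); the thresholds are illustrative and this is not the actual product logic.

```python
# Simplified head-turn challenge check: during a "turn your head" instruction,
# the estimated yaw angle should change smoothly and cover the requested range,
# whereas spliced or replayed footage often shows jumps or an incomplete motion.
import numpy as np

def head_turn_is_plausible(yaw_degrees: list[float],
                           required_range: float = 30.0,
                           max_jump: float = 8.0) -> bool:
    yaws = np.asarray(yaw_degrees, dtype=float)
    if yaws.size < 5:
        return False                      # too few frames for a real challenge
    if yaws.max() - yaws.min() < required_range:
        return False                      # the head never turned far enough
    if np.abs(np.diff(yaws)).max() > max_jump:
        return False                      # implausible frame-to-frame jump
    return True

# Toy trajectories: a smooth turn versus an abrupt splice.
print(head_turn_is_plausible(list(np.linspace(-20, 20, 30))))  # True
print(head_turn_is_plausible([-20, -19, 25, 24, 23, 22]))      # False (jump)
```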

 

Fraud boosted by Sora is not flawless: digging into its points of difference to empower products is the key

 

Very shortly after the shock release of Sora, ADVANCE.AI's internal team became keenly aware of the impact Sora may have on the anti-fraud field in the future. Although the mature product is not yet available to the public, once it is formally put on the market, Sora, as a sharp tool, will quickly be eyed by fraudsters as a potential instrument of crime, so the anti-fraud front needs to take precautions in advance.

 

"This time OpenAI released Sora can produce such a big stir in various fields, with the word 'far ahead' can be more accurately defined", said Yu Jiangbo, head of AI products at ADVANCE.AI. "At a time when the general level of other AI-generated video companies is still around 4 seconds, Sora directly supports the generation of a maximum of 60-second-length videos, leapfrogging and upgrading by a factor of 15. From the official release of the DEMO video, Sora can realize seamless switching of multi-angle shots under the premise of maintaining the consistency of the main body, and the whole picture is clean and smooth, realizing the effect of a real video blockbuster. As for the use of the basic model architecture, most of the previous AI-generated videos are based on the 'big model + big data' architecture, so it is difficult to break through on the issue of AI illusion, but Sora has already had the prototype of the world model, so the content is more in line with, and the logic is more convincing. "

 

Yu Jiangbo added, "Technically speaking, Sora's technology foundation includes Transformer, image captioning, diffusion modeling, video patches, and Dit, etc., all of which have existed before, but OpenAI combines more powerful arithmetic and rich training data, which together drive Sora's innovations in the AI innovation in the field of generating video. Stronger spears inevitably give rise to stronger shields, and Sora will inevitably be applied in the financial fraud field, but by combining these technical features behind Sora, we can simulate possible fraudulent behaviors, such as those initiated by AI videos with more realistic images and fewer loopholes of in-depth forgeries, and find out what is different from the previous established in-depth forgeries and synthetic identity frauds, and then improve our ADVANCE.AI anti-fraud solution product defense capabilities, so that customers can use the peace of mind. In addition to screen recognition, more dimensional information can better detect potential fraud. These information networks can help us build a more secure anti-fraud system, including user behavioral characteristics and network relationships."

 

 
