At the same time, a unified telephone anti-spam system is needed. The server side would be hosted by large telecom operators, and the client side by ordinary subscribers. AI can safely be used in such a system. After receiving a spam call, the subscriber marks the number as spam, and the report is immediately sent to the telecom operator. Once the operator has received, say, five such reports, it enables AI analysis of the spammer's calls. If the spammer repeats the same thing over three calls, this is clearly telephone advertising. There is no need to intercept or decrypt the conversation; it is enough to analyze the raw data coming from the spammer. If the data is identical (for example: "A new dentistry invites you ..."), it can safely be classified as spam, and the confidentiality of the calls is not violated.
The goal of such systems is to make blocking as fast and effective as possible, and to raise the cost of running spam services to an unacceptable level.
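To make the flow described above concrete, here is a minimal sketch of the reporting and matching logic, assuming the thresholds given in the text (five subscriber reports, three identical calls). The class name, data structures, and the idea of hashing the raw call payload are illustrative assumptions, not part of any existing operator system.

```python
# Illustrative sketch of the operator-side logic described above.
# All names, thresholds, and data structures are hypothetical; a real
# deployment would work on signalling/media streams, not in-memory dicts.

import hashlib
from collections import defaultdict

REPORT_THRESHOLD = 5   # subscriber complaints before analysis starts
REPEAT_THRESHOLD = 3   # identical calls needed to classify as spam

class AntiSpamMonitor:
    def __init__(self):
        self.reports = defaultdict(int)        # caller number -> complaint count
        self.fingerprints = defaultdict(list)  # caller number -> payload hashes
        self.blocked = set()

    def report_spam(self, caller: str) -> None:
        """Client side: a subscriber marks an incoming number as spam."""
        self.reports[caller] += 1

    def observe_call(self, caller: str, raw_payload: bytes) -> bool:
        """Server side: once enough complaints arrive, compare raw call data.

        Only a hash of the payload is compared, so the conversation itself
        is never listened to or decrypted.
        Returns True if the caller is classified as spam and blocked.
        """
        if caller in self.blocked:
            return True
        if self.reports[caller] < REPORT_THRESHOLD:
            return False

        digest = hashlib.sha256(raw_payload).hexdigest()
        self.fingerprints[caller].append(digest)

        # The same payload repeated three times is treated as advertising.
        if self.fingerprints[caller].count(digest) >= REPEAT_THRESHOLD:
            self.blocked.add(caller)
            return True
        return False


monitor = AntiSpamMonitor()
for _ in range(5):
    monitor.report_spam("+1-555-0100")
for _ in range(3):
    spam = monitor.observe_call("+1-555-0100", b"A new dentistry invites you ...")
print(spam)  # True: identical payload repeated three times after five reports
```

Comparing only payload fingerprints mirrors the point in the text: identical raw data is enough to flag spam without ever decrypting the conversation.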
Google: Generative AI will increase the level of cyber attacks
10.11.2023
Thanks to generative artificial intelligence, cyberattacks are becoming much "smarter." That's the main conclusion of Google's new report, "Cloud Cybersecurity Forecast 2024," reports ZDNet.
Technology is becoming increasingly smart thanks to developments like generative AI, and that includes cyberattacks. Google's new cybersecurity forecast shows that the rise of artificial intelligence will lead to new threats you need to be aware of.
The report says that generative AI and large language models (LLMs) will be used in various cyberattacks such as phishing, SMS spam and other social engineering operations to make content and materials, including voice and video, appear more legitimate.
For example, generative AI will make the telltale signs of phishing attacks, such as typos, grammatical errors, and a lack of cultural context, harder to detect, because it is good at mimicking natural language.
In other cases, attackers can feed an LLM legitimate content and generate a modified version that suits their purposes but retains the style of the original material.
The report also predicts further development of LLMs and other generative AI tools offered as paid services to help attackers conduct their attacks more efficiently and at a lower cost.