

Soft cyber threat: how technology fights fakes on the web

Fake news is discussed today on many platforms and at many levels, from Instagram comments to government ministries. How seriously fakes influence the course of events is debatable, but the demand for detecting and neutralizing them is quite real.

Academia is trying to answer this demand. The topic is under the scrutiny of conferences, scientific events, and grants. Developers are creating solutions for detecting fake news using artificial intelligence and computational linguistics methods.

Current and future users of such systems include:

• large social networks,
• online resources,
• media,
• news aggregators,
• state authorities,
• law enforcement agencies and official institutions.


What is fake news?

We should start with the fact that a comprehensive and unified solution has not yet been developed; more than that, there is no single definition of fake news. There is a general, somewhat vague, and sometimes intuitive understanding of what it is, but there is no precise distinction between fake news and other phenomena.

It is difficult to draw this line since communication in social networks, media, and blogs involves complex behaviour of authors and readers, which may include, for example, playful, creative, or ironic elements that are similar in some ways to fake news.

When we talk about fake news, we mean that its creators intended to mislead us or deliberately provided false information. This includes promoting well-known or personal brands online by twisting information and deceiving people.

It is interesting that in the academic environment, the task of detecting fake news is also to determine the ideologically coloured presentation of information, even if such a presentation concerns only a small part of the text.

The fact that there is no precise definition, and therefore no unambiguous signs of fake news, makes it difficult to detect automatically. But the field is developing actively, since automation is almost the only effective way to counter fake content, given the volume of digital content produced and the mechanics of its distribution.

The most promising way to solve these problems is to use artificial intelligence, in particular the latest advances in computational linguistics. This is a scientific discipline whose subject is the creation of mathematical models describing natural languages; it is used in applied fields such as:

• information retrieval,
• machine translation,
• chat-bots,
• intelligent data-processing solutions,
• innovations for detecting fake news.


Checking the facts

Computational linguists are testing different approaches to detecting fake news. One of the most common is fact-checking: if a message contains deliberately false information, it can be checked against other sources.

Checkable claims are statements like "London is the capital of Great Britain", or news reports saying that a specific event occurred at a specific place at a specific time.

For verification, the algorithm can use a list of verified sources: experts select them as trustworthy, and this choice is subjective. You can also search for and analyze the quality of resources that have already published this news. This method allows you to combine the advantages of machine processing and expert evaluation to identify distorted facts. 
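The idea of checking a claim against a curated list of trusted sources can be sketched very simply. The sketch below is an illustrative assumption, not any production system: it treats a claim as "supported" if enough of its content words appear together in at least one trusted document.

```python
# Minimal sketch of source-based fact-checking: a claim counts as
# "supported" if enough of its content words co-occur in at least one
# document from a manually curated list of trusted sources.
# The trusted_docs list and the 0.6 threshold are illustrative choices.

def content_words(text):
    """Lowercase the text and drop very short (mostly function) words."""
    return {w.strip('.,!?"').lower() for w in text.split() if len(w) > 3}

def is_supported(claim, trusted_docs, threshold=0.6):
    """Return True if some trusted document covers most of the claim's words."""
    claim_words = content_words(claim)
    if not claim_words:
        return False
    for doc in trusted_docs:
        overlap = len(claim_words & content_words(doc))
        if overlap / len(claim_words) >= threshold:
            return True
    return False

trusted_docs = [
    "London is the capital of Great Britain and its largest city.",
    "The summit between the two countries took place in Geneva.",
]

print(is_supported("London is the capital of Great Britain", trusted_docs))  # True
print(is_supported("Berlin is the capital of France", trusted_docs))         # False
```

Note the weakness of word overlap: a false claim that reuses words from a true one (say, "Paris is the capital of Great Britain") can still score highly, which is why real fact-checking systems need much deeper linguistic analysis than this sketch.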

The obvious downside is the lack of predictive power of the solution: if the media receives an emergency message about the beginning of a military conflict, for example, that has not yet appeared on other resources, the system will not be able to mark it as fake or trustworthy. Only after some time will other sites begin to confirm or deny it. This delay is critical for many new media groups.


Defining the ideology

As already mentioned, in the academic environment, the task of detecting media fakes is widely understood, and it includes the identification of ideologically coloured texts. To do this, a corpus of texts is collected and then manually marked up: some of them are defined as ideologically neutral, while others are defined as biased. 

This way, we get a marked-up dataset and can use it to train the model. As a result, the system should automatically classify new texts as ideologically coloured or neutral. 
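The train-on-a-labelled-corpus workflow described above can be illustrated with a toy classifier. The tiny corpus and the naive Bayes model below are assumptions for demonstration only; real systems use large datasets and modern neural models.

```python
import math
from collections import Counter, defaultdict

# Toy sketch: train a naive Bayes classifier on a manually labelled
# corpus of "neutral" vs "biased" texts, then classify new texts.

def tokenize(text):
    return text.lower().split()

def train(corpus):
    """corpus: list of (text, label). Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in corpus:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log naive Bayes score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float('-inf')
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # label prior
        n = sum(word_counts[label].values())
        for w in tokenize(text):
            # Laplace smoothing so unseen words do not zero out the score.
            score += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

corpus = [
    ("the report cites both sides and official figures", "neutral"),
    ("officials confirmed the data in a public statement", "neutral"),
    ("the corrupt regime lies to its own people again", "biased"),
    ("only traitors could support this disgraceful policy", "biased"),
]
wc, lc = train(corpus)
print(classify("officials cites public figures", wc, lc))   # neutral
print(classify("disgraceful regime lies again", wc, lc))    # biased
```

The key point the sketch captures is that the model learns only from the manual markup: whatever the annotators call "biased" defines what the system will later flag.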

Ideology can be understood as a one-sided presentation of information when, for example, messages clearly or even aggressively represent the views of only one of the participants in a military or political conflict. 

Today, government agencies, law enforcement agencies, and political institutions are most interested in identifying such texts.

There is also a narrower task: classifying ideological texts by the specific ideology they belong to (liberal, conservative, radical, and others). To solve it, we again take a corpus of marked-up texts in which each message is labelled with its ideological affiliation.
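At its simplest, multi-class labelling can be pictured as matching a text against a word profile per ideology. The hand-picked profiles below are purely illustrative assumptions; a real system would learn such profiles from the labelled corpus rather than hard-code them.

```python
# Illustrative sketch of multi-class ideology labelling: each label has
# a profile of characteristic words (hand-picked here for demonstration),
# and a text receives the label whose profile it overlaps most.

PROFILES = {
    "liberal": {"rights", "freedom", "reform", "inclusion"},
    "conservative": {"tradition", "order", "family", "security"},
    "radical": {"revolution", "overthrow", "struggle", "uprising"},
}

def label_ideology(text):
    words = set(text.lower().split())
    # Score each ideology by profile overlap; the highest overlap wins.
    scores = {label: len(words & prof) for label, prof in PROFILES.items()}
    return max(scores, key=scores.get)

print(label_ideology("calls for reform and equal rights"))   # liberal
print(label_ideology("defend tradition and family order"))   # conservative
```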




