Google's new patent: using machine learning to identify "disinformation" on social media
New techniques for monitoring social media.
Google has filed an application with the U.S. Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider "misinformation" on social media.
Google already uses elements of AI in algorithms programmed to automate censorship across its vast platforms, and this document outlines a specific path the company plans to follow.
The overall goal of the patent is to identify information operations (IO); the system is then supposed to "predict" whether they contain "misinformation."
Judging by the explanation Google attached to the application, the company at first seems to blame its own existence for the spread of "misinformation": the text states that information operations campaigns are cheap and widely used because it is easy to spread their messages virally thanks to "amplification stimulated by social media platforms."
But it seems Google is developing the tool with other platforms in mind.
The technology giant specifically states that others (X, Facebook, and LinkedIn are mentioned by name in the application) could use the system to train their own "different prediction models."
Machine learning itself depends on feeding algorithms large amounts of data, and there are two main types, "supervised" and "unsupervised": the latter works by providing an algorithm with huge datasets (such as images or, in this case, language) and asking it to "learn" to identify what it is "looking for."
(Reinforcement learning is also part of the process, essentially training the algorithm to become increasingly efficient at detecting whatever those building the system are looking for.)
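To make the "supervised" idea concrete, here is a minimal, purely illustrative sketch of training a text classifier on labeled examples. Nothing here comes from the patent: the dataset, labels, and `score` function are hypothetical stand-ins for the far larger models Google would use.

```python
from collections import Counter

# Toy labeled dataset -- entirely hypothetical, not from the patent.
labeled = [
    ("shocking secret they do not want you to know", "io"),
    ("share this before it gets deleted", "io"),
    ("local council approves new park budget", "benign"),
    ("university publishes annual research report", "benign"),
]

# "Supervised" learning in miniature: build per-label word frequencies
# from the labeled examples.
counts = {"io": Counter(), "benign": Counter()}
totals = {"io": 0, "benign": 0}
for text, label in labeled:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def score(text):
    """Naive-Bayes-style estimate that a text resembles the 'io' examples."""
    io_score, benign_score = 1.0, 1.0
    for word in text.split():
        # Laplace smoothing so unseen words do not zero out the product.
        io_score *= (counts["io"][word] + 1) / (totals["io"] + 2)
        benign_score *= (counts["benign"][word] + 1) / (totals["benign"] + 2)
    return io_score / (io_score + benign_score)

print(score("share this shocking secret"))  # near 1.0: resembles "io" examples
print(score("council research report"))     # near 0.0: resembles "benign" examples
```

Real systems replace the word counts with neural networks trained on millions of examples, but the workflow is the same: labeled data in, a scoring function out.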
The ultimate goal here is most likely for Google to make its "disinformation detection," i.e., censorship, more efficient while targeting a specific type of data.
Indeed, the patent states that the tool uses neural-network language models (neural networks being the "infrastructure" of ML).
Google's tool classifies data as IO or benign, and further aims to label it as coming from an individual, an organization or a country.
The model then predicts the probability that the content is part of a "disinformation campaign" by assigning a score to it.
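The classify-attribute-score pipeline described above could be sketched as follows. The function name, source labels, and threshold are assumptions for illustration only; the patent does not publish its actual thresholds or output format.

```python
# Hypothetical downstream step: a classifier's probability becomes a
# "campaign score", a source-type label is attached, and a threshold
# (assumed here to be 0.8) decides the verdict.
def label_post(io_probability, source_type, threshold=0.8):
    verdict = ("disinformation campaign"
               if io_probability >= threshold else "benign")
    return {"source": source_type, "score": io_probability, "verdict": verdict}

print(label_post(0.91, "organization"))  # high score -> flagged as a campaign
print(label_post(0.12, "individual"))    # low score -> treated as benign
```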
https://docs.reclaimthenet.org/US-20230385548-A1-I.pdf