16-02-2021, 13:11
|
#151
|
Registered User
Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,572
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
To be clear then, are you convinced that the majority of FB users have been adversely affected (eg polarized, angered, confused, misled) by political or other misinformation sent by the FB algorithms? Can we even assume that a majority of FB users engage in political dialogue?
|
I don't assume anything, and I don't have any data regarding the proportion of those on the platform that have been adversely affected. I'm not sure what that has to do with your claim that, "studies that point to the echo-chamber tendencies of FB and other social media are in effect saying that the lazy/stupid/more gullible... get stupider."
I simply asked, where are these studies?
Quote:
Originally Posted by Lake-Effect
If I liked a picture of bicycles, and get sent more pictures of bicycles, is this harmful? We agree that algorithms present the user with content it thinks s/he will like. To some... that's a feature, not a bug. Certainly more problematic with misinformation. I'm just hoping that the baby doesn't get tossed with the bathwater.
|
I'm not sure what you're chafing against. I haven't said everything FB has done is wrong or bad. In fact, I've said the opposite. But in your example, the simplistic view is of course no, nothing is wrong with more bicycle pictures. But maybe pictures embedded in ads that use psychological manipulation based on what FB knows about you to convince you to buy more bicycles that you don't need... this might be harmful.
|
|
|
16-02-2021, 13:13
|
#152
|
Registered User
Join Date: Jul 2011
Location: USA
Posts: 1,011
|
Re: Addressing Misinformation and Harmful Content Online
The greatest security threat of the post-truth age
....If home security is about making sure our possessions are safe, financial security is about keeping our money safe, national security is about keeping our country safe, then epistemic security is about keeping our knowledge safe.
Episteme is a Greek philosophical term, meaning "to know". Epistemic security therefore involves ensuring that we do in fact know what we know, that we can identify claims that are unsupported or not true, and that our information systems are robust to "epistemic threats" such as fake news.
In our report [ Tackling threats to informed decision-making in democratic societies], we explore the potential countermeasures and areas of research that may help preserve epistemic security in democratic societies. But in this article, let's look at four key trends that have exacerbated the problem, and made it increasingly difficult for societies to respond to pressing challenges and crises:
1. Attention scarcity...
2. Filter bubbles and bounded rationality...
3. Adversaries and blunderers...
4. Erosion of trust...
In our report, we explore some of the possible consequences if we don't act. One of the worst-case scenarios we called "epistemic babble". In this future, the ability for the general population to tell the difference between truth and fiction is entirely lost. Although information is easily available, people cannot tell whether anything they see, read or hear is reliable or not. So, when the next pandemic comes along, co-operation across society becomes impossible. It's a chilling idea – but Covid-19 has shown that we're closer than we might once have thought.
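The "filter bubbles" trend in point 2 is easy to demonstrate with a toy model (the topics, numbers, and update rule below are all made up for illustration; real feed rankers are vastly more complex):

```python
# A toy feed ranker: always show the topic it predicts the user likes best,
# and update that prediction from the user's clicks.

TOPICS = ["sailing", "politics", "cooking", "sports"]

def simulate(user_pref, rounds=20):
    """Return the sequence of topics shown to a user who only ever
    clicks on `user_pref`."""
    affinity = {t: 1.0 for t in TOPICS}       # ranker's belief about the user
    shown = []
    for _ in range(rounds):
        topic = max(affinity, key=affinity.get)   # show the predicted favourite
        shown.append(topic)
        if topic == user_pref:
            affinity[topic] += 1.0            # engagement reinforces the belief
        else:
            affinity[topic] -= 0.5            # ignored content decays
    return shown

print(simulate("politics"))
```

After a single wrong guess, the feed collapses to one topic for the rest of the run: the ranker's belief and the user's clicks reinforce each other, which is the bubble in miniature.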
__________________
The greatest deception men suffer is their own opinions.
- Leonardo da Vinci -
|
|
|
16-02-2021, 13:17
|
#153
|
Registered User
Join Date: May 2011
Location: Lake Ont
Posts: 8,570
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by valhalla360
To put it in context, if even 2-3% of the voters in the last election were misled into changing their vote, it could change the results of the election. Actually even less if they are targeting certain swing states.
|
Well, if "election advertising" isn't the very definition of "political misinformation", I don't know what is. So I don't think you've made a point there.
Quote:
Let's expound on your bike example, except it's a car guy who's into muscle cars, and instead of more muscle car photos, the system sends articles and groups vilifying muscle car owners as evil into his feed. It's also targeting known associates with material which encourages them to vilify muscle car owners. And of course, since it's all a black box, it's almost impossible to prove that this process is ongoing.
|
Sorry, I don't think that's any kind of useful analogy, or reflects how the algorithms work.
I don't believe that FB went out of its way to remove messages simply for being "conservative". I believe the stated intent was to remove content that was one or more of: false or misleading, libellous, prejudicial, inflammatory, promoted or incited unlawful action.
Of course, banning Trump is both the extreme example, and an exceptional case, and I don't honestly know the calculus behind that, beyond the sheer amount of misinformation. Did someone somewhere, in or out of Facebook, actually decide that the guy was gonna cause a war if he wasn't muzzled?
|
|
|
16-02-2021, 13:24
|
#154
|
Registered User
Join Date: May 2011
Location: Lake Ont
Posts: 8,570
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Mike OReilly
I'm not sure what you're chafing against. I haven't said everything FB has done is wrong or bad. In fact, I've said the opposite. But in your example, the simplistic view is of course no, nothing is wrong with more bicycle pictures. But maybe pictures embedded in ads that use psychological manipulation based on what FB knows about you to convince you to buy more bicycles that you don't need... this might be harmful.
|
Your last example - how is that different from ANY advertising? Of course some people would like to sell you a bike. And some people buy bikes for irrational reasons (-cough -). This is wrong?
edit: Ok, your objection is the extent to which FB exploits what it knows about you to "play dirty" with ads. yes?
|
|
|
16-02-2021, 13:50
|
#155
|
Registered User
Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,572
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
Your last example - how is that different from ANY advertising? Of course some people would like to sell you a bike. And some people buy bikes for irrational reasons (-cough -). This is wrong?
edit: Ok, your objection is the extent to which FB exploits what it knows about you to "play dirty" with ads. yes?
|
Appreciate your edit ... yes. But I agree, it's not an easy thing for any of us to assess or understand. Of course all advertising is about convincing us to buy stuff, but what is different with these new tools like FB and Google is that they are able to take this manipulation to a whole new level. Using the immense information they gather about their users, and even more importantly, how they're able to understand us in aggregate terms, they can essentially know us better than we know ourselves.
Advertisers and marketers are the original practical psychologists, and FB et al. are just taking this to a whole new level. But is it part of a continuum, or is it some sort of quantum leap, where we really are talking about something new, and far more powerful? I tend to think so, but I struggle with the question.
Whether it's a continuum, or as I believe, a whole new thing, the power these companies now possess to disseminate and manipulate is unparalleled in history. They are not doing it for political gain. They're doing it for profit -- just like all companies do.
|
|
|
16-02-2021, 13:52
|
#156
|
Registered User
Join Date: Apr 2013
Posts: 11,004
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
Well, if "election advertising" isn't the very definition of "political misinformation", I don't know what is. So I don't think you've made a point there.
Sorry, I don't think that's any kind of useful analogy, or reflects how the algorithms work.
I don't believe that FB went out of its way to remove messages simply for being "conservative". I believe the stated intent was to remove content that was one or more of: false or misleading, libellous, prejudicial, inflammatory, promoted or incited unlawful action.
Of course, banning Trump is both the extreme example, and an exceptional case, and I don't honestly know the calculus behind that, beyond the sheer amount of misinformation. Did someone somewhere, in or out of Facebook, actually decide that the guy was gonna cause a war if he wasn't muzzled?
|
There's a reason traditional media sources have started putting disclaimers on commercials about who paid for them, and politicians now state whether they approved an ad... if there is an issue, the tv station can say they were just a conduit selling ad space. If something moves into the area of misinformation, it's clear who to go after. Unless it's wildly obvious that something is false, they just stay out of it. Of course, they may be vetting a few dozen commercials in a typical election cycle for wildly false claims... a few hundred tops.
If you believe that FB and social media companies are not purposely molding the message for political gain, you are naïve. Of course, they rarely do it in a direct manner (such as banning Trump). That's what makes it so dangerous. It's difficult to tell exactly what their intent is. I don't buy the "stated intent" for a minute. That's lawyers doing a CYA attempt.
As far as the analogy, you are correct, in many ways it's much more subtle. I was just trying to get the concept across, not the level of sophistication. If the bulk of the target audience can easily see it for what it is, they will resist it and it will be ineffective. So it won't be 100% "muscle cars are evil". They will slip in a story about EVs somehow being better than muscle cars. Friends who have demonstrated a preference for EVs will get the guy's muscle car posts highlighted, knowing some will challenge him on it. All intended to gradually push him into the "correct" way of thinking.
Old folks often struggle with the concept of how much and how subtle they can be. In the old days, the KGB simply didn't have enough informants to watch and report on every possible target at all times. With AI and machine learning, yes, FB can monitor and respond to every post and do it in subtle ways. Google and Apple are listening in on your phone...they are everywhere and they are compiling a dossier on every individual. In most cases, it's for financial gain but only a fool would think it wouldn't be used for other more nefarious purposes if there aren't some restrictions put in place.
|
|
|
16-02-2021, 14:58
|
#157
|
Registered User
Join Date: Apr 2007
Location: Australia
Boat: Island Packet 40
Posts: 6,501
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by SailOar
The greatest security threat of the post-truth age
....If home security is about making sure our possessions are safe, financial security is about keeping our money safe, national security is about keeping our country safe, then epistemic security is about keeping our knowledge safe.
Episteme is a Greek philosophical term, meaning "to know". Epistemic security therefore involves ensuring that we do in fact know what we know, that we can identify claims that are unsupported or not true, and that our information systems are robust to "epistemic threats" such as fake news.
In our report [ Tackling threats to informed decision-making in democratic societies], we explore the potential countermeasures and areas of research that may help preserve epistemic security in democratic societies. But in this article, let's look at four key trends that have exacerbated the problem, and made it increasingly difficult for societies to respond to pressing challenges and crises:
1. Attention scarcity...
2. Filter bubbles and bounded rationality...
3. Adversaries and blunderers...
4. Erosion of trust...
In our report, we explore some of the possible consequences if we don't act. One of the worst-case scenarios we called "epistemic babble". In this future, the ability for the general population to tell the difference between truth and fiction is entirely lost. Although information is easily available, people cannot tell whether anything they see, read or hear is reliable or not. So, when the next pandemic comes along, co-operation across society becomes impossible. It's a chilling idea – but Covid-19 has shown that we're closer than we might once have thought.
|
And the real culprit here is not the internet.
Groups whose agendas are not the propagation of objective truths have been targeting western educational institutions for decades. In Australia it is known as "the long march through the institutions" and for these folks it is now coming to fruition. Having access to the alternate and disparate sources of information provided by the internet may be our savior in the long run.
__________________
Satiriker ist verboten, la conformité est obligatoire
|
|
|
16-02-2021, 15:22
|
#158
|
Registered User
Join Date: May 2011
Location: Lake Ont
Posts: 8,570
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Mike OReilly
Appreciate your edit ... yes. But I agree, it's not an easy thing for any of us to assess or understand. Of course all advertising is about convincing us to buy stuff, but what is different with these new tools like FB and Google is that they are able to take this manipulation to a whole new level. Using the immense information they gather about their users, and even more importantly, how they're able to understand us in aggregate terms, they can essentially know us better than we know ourselves.
Advertisers and marketers are the original practical psychologists, and FB et al. are just taking this to a whole new level. But is it part of a continuum, or is it some sort of quantum leap, where we really are talking about something new, and far more powerful? I tend to think so, but I struggle with the question.
|
Roughly half of my 22-year career in web app development was in online advertising. I've been under the hood of two major agencies and several national brands. And, like you, I'm also a person who's online frequently. Admittedly not on FB. But Google ads chase me like everyone else.
From both these perspectives... I'm not (yet) that worried. The information they collect... they don't yet know how to use with anywhere near diabolical precision. About the most blatant lately is Banggood & AliExpress: after I happen to browse something on their sites... BOOM! they're in the CF sidebar, showing me just what I was looking at. And guess what - sometimes I click through! This isn't being evil, it's being persistent and opportunistic, and since my BG/AliExpress habit is less than $60/month... I'm a willing stooge. Ditto for eBay or Amazon stuff I've looked at. OR... Google ads think I'm obsessed with EARWAX! (why, Google, why?) Seriously, WTF. I've never gone searching for earwax-related anything.
So... I'm convinced that they're still pretty dumb overall. AI is automated idiocy. They're not making full use of the data they have (there's just too much), and parts of the internet ad systems are still abysmally stupid (why do they keep serving ads that you've closed repeatedly, or reported?).
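For what it's worth, the "they're in the CF sidebar" behaviour needs no intelligence at all. A minimal, entirely hypothetical sketch of cross-site retargeting (the names and structure here are mine, not any real ad network's):

```python
from collections import defaultdict

# view_log plays the role of the third-party tracker's server-side profile:
# one cookie ID follows the same browser across unrelated sites.
view_log = defaultdict(list)              # cookie_id -> products viewed

def on_product_view(cookie_id, product):
    """Tracking pixel on the shop's product page logs the view."""
    view_log[cookie_id].append(product)

def serve_ad(cookie_id, default_ad="generic banner"):
    """Ad slot on any other site: replay the most recent product, if any."""
    views = view_log.get(cookie_id)       # .get() avoids creating empty entries
    return f"ad: {views[-1]}" if views else default_ad

on_product_view("cookie-42", "solar panel")
on_product_view("cookie-42", "anchor winch")
print(serve_ad("cookie-42"))   # the winch follows you to the forum sidebar
print(serve_ad("cookie-99"))   # a browser with no history gets the generic ad
```

No model of the user at all, just "show them the last thing they looked at" - persistent and opportunistic, exactly as described.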
Who I do fear are the actual political powerbrokers who have the time and motivation to really mine the data for targeted advertising. The whole Cambridge Analytica thing for example. And FB's willingness to sell/give them data, then let them microtarget using that data. THAT is evil. And that - consumer data - is the first place there should be legislation, like the EU is doing.
Quote:
Whether it's a continuum, or as I believe, a whole new thing, the power these companies now possess to disseminate and manipulate is unparalleled in history. They are not doing it for political gain. They're doing it for profit -- just like all companies do.
|
It has the potential. It's not too late to guard against the deepest abuses.
|
|
|
16-02-2021, 15:26
|
#159
|
Registered User
Join Date: May 2011
Location: Lake Ont
Posts: 8,570
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by valhalla360
If you believe that FB and social media companies are not purposely molding the message for political gain, you are naïve.
...
In most cases, it's for financial gain but only a fool would think it wouldn't be used for other more nefarious purposes if there aren't some restrictions put in place.
|
Would somebody PLEASE tell me what political gain/agenda/whatever that companies like FB are pursuing?
As I mentioned in reply to Mike, it's not FB et al you should worry about, it's the companies/consultants they are willing to share their data with.
|
|
|
16-02-2021, 15:45
|
#160
|
Registered User
Join Date: Jul 2011
Location: USA
Posts: 1,011
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
....About the most blatant lately is Banggood & AliExpress: after I happen to browse something on their sites... BOOM! they're in the CF sidebar, showing me just what I was looking at. And guess what - sometimes I click through! This isn't being evil, it's being persistent and opportunistic, and since my BG/AliExpress habit is less than $60/month... I'm a willing stooge. Ditto for eBay or Amazon stuff I've looked at. OR... Google ads think I'm obsessed with EARWAX! (why, Google, why?) Seriously, WTF. I've never gone searching for earwax-related anything....
|
I use an ad-blocker, so never see ads in CF. I also automatically delete cookies every time I close my browser (Firefox or Opera), and I use CCleaner periodically to scrub other trackers out of my computer.
__________________
The greatest deception men suffer is their own opinions.
- Leonardo da Vinci -
|
|
|
16-02-2021, 16:22
|
#161
|
Registered User
Join Date: Sep 2011
Location: Good question
Boat: Rafiki 37
Posts: 14,572
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
It has the potential. It's not too late to guard against the deepest abuses.
|
On this we can completely agree.
Quote:
Originally Posted by SailOar
I use an ad-blocker, so never see ads in CF. I also automatically delete cookies every time I close my browser (Firefox or Opera), and I use CCleaner periodically to scrub other trackers out of my computer.
|
Yup, no ads here. I keep trackers at bay via Ghostery (there are eight on this very page trying to track me). I don't use Google anything, and of course clear out cookies.
|
|
|
17-02-2021, 05:09
|
#162
|
Senior Cruiser
Join Date: Mar 2003
Location: Thunder Bay, Ontario - 48-29N x 89-20W
Boat: (Cruiser Living On Dirt)
Posts: 51,355
|
Re: Addressing Misinformation and Harmful Content Online
Virologist Angela Rasmussen laments the spread of unverified claims about the origins of COVID-19. There needs to be a lot less speculation, and a lot more investigation.
'Scientists said claims about China creating the coronavirus were misleading. They went viral anyway.'
More ➥ https://www.washingtonpost.com/techn...n-li-meng-yan/
__________________
Gord May
"If you didn't have the time or money to do it right in the first place, when will you get the time/$ to fix it?"
|
|
|
17-02-2021, 07:20
|
#163
|
Senior Cruiser
Join Date: Mar 2003
Location: Thunder Bay, Ontario - 48-29N x 89-20W
Boat: (Cruiser Living On Dirt)
Posts: 51,355
|
Re: Addressing Misinformation and Harmful Content Online
“The Use of Social Media by United States Extremists*”
[*Far-left, far-right, Islamist, and single-issue extremists]
RESEARCH BRIEF SUMMARY
"Emerging communication technologies, and social media platforms in particular, play an increasingly important role in the radicalization and mobilization processes of violent and non-violent extremists (Archetti, 2015; Cohen et al., 2014; Farwell, 2014; Klausen, 2015). However, the extent to which extremists utilize social media, and whether it influences terrorist outcomes, is still not well understood (Conway, 2017). This research brief expands the current knowledge base by leveraging newly collected data on the social media activities of 479 extremists in the PIRUS dataset who radicalized between 2005 and 2016.[1] This includes descriptive analyses of the frequency of social media usage among U.S. extremists, the types of social media platforms used, the differences in the rates of social media use by ideology and group membership, the purposes of social media use, and the impact of social media on foreign fighter travel and domestic terrorism plots. The PIRUS data reveal four key findings on the relationship between social media and the radicalization of U.S. extremists ..."
More ➥ https://www.start.umd.edu/pubs/START...f_July2018.pdf
__________________
Gord May
"If you didn't have the time or money to do it right in the first place, when will you get the time/$ to fix it?"
|
|
|
17-02-2021, 08:49
|
#164
|
Senior Cruiser
Join Date: Mar 2003
Location: Thunder Bay, Ontario - 48-29N x 89-20W
Boat: (Cruiser Living On Dirt)
Posts: 51,355
|
Re: Addressing Misinformation and Harmful Content Online
__________________
Gord May
"If you didn't have the time or money to do it right in the first place, when will you get the time/$ to fix it?"
|
|
|
17-02-2021, 08:52
|
#165
|
Registered User
Join Date: Sep 2010
Posts: 2,909
|
Re: Addressing Misinformation and Harmful Content Online
Quote:
Originally Posted by Lake-Effect
... doesn't the Citizens United decision effectively give the same rights, including 1st Amendment, to the "speech" of private entities like corporations?
|
I meant that they don't have to abide by the restrictions of the First Amendment, not that they don't get to enjoy its benefits.
Kind of a big difference, but an interesting irony.
__________________
Founding member of the controversial Calypso rock band, Guns & Anchors!
|
|
|