
Equifax hack reminds everyone how much they hate credit agencies


Congratulations! Your debt will follow you forever.

Image: OJO Images/REX/Shutterstock

Let the Equifax hate begin. 

Not only did the company expose the personal information of 143 million people, including names, Social Security numbers, addresses, birth dates, and even some driver’s license numbers, but three of its executives also sold $1.8 million worth of company stock right after the breach was discovered. 

And let’s just recap what Equifax does, shall we? Say someone struggles to pay back a student loan, or is saddled with debt because the credit card marketed to them in college suddenly has a 30-percent APR. 

Equifax, Experian, and TransUnion are the three companies that make sure late payments follow you around in the form of a credit score, which can prevent you from getting a home or car loan. 

On Twitter, it was obvious people didn’t have a lot of pity for Equifax.

But it’s not like Equifax was forced to pay millions earlier this year on charges that it advertised free or $1 credit services that actually cost more than $200 a year. 

Oh, it was? This should end well. 




Source link

The ‘first major hate site on the internet’ is down


Former Ku Klux Klan imperial wizard Don Black showed the world how the internet could galvanize hate beginning in 1995. 

Now the website he founded that year, Stormfront — called the “first major hate site on the Internet” by the SPLC — is offline. 

Domain provider Web.com has put a hold on the site after the Lawyers’ Committee For Civil Rights Under Law wrote a letter to the company’s CEO, pointing out that Stormfront violated Web.com’s policy stating “users may not … utilize the services in a manner deemed, in company’s sole discretion, to display bigotry, racism, discrimination, or hatred in any manner whatsoever” [PDF].

“Web.com’s decision to terminate services to Stormfront.org demonstrates that the company would enforce its own policies in order to support its diverse customers and the online community as a whole,” the committee wrote in a press release provided to Mashable.

Black founded Stormfront in 1995 after teaching himself some computer programming during three years he spent in jail for “plotting to overthrow a Caribbean island government,” according to the SPLC. Upon his release, he built Stormfront into a haven for budding white supremacists to vent their rage online and to meet others who felt like them. 

Black grew his forum to a membership of around 300,000, according to the SPLC. Though the number of active users was much smaller, some of the site’s readers have committed violent crimes.

Stormfront readers include Richard Poplawski, who killed three Pittsburgh police officers in 2009 as they came to his door after his mother called 911 following an argument; Richard Baumhammers, who went on a murderous rampage in 2000 — also in Pittsburgh — that involved killing his 63-year-old Jewish neighbor and vandalizing her synagogue with swastikas; and Anders Breivik, who killed 77 people in Norway in 2011 by setting off a bomb in Oslo and carrying out a mass shooting at a political youth camp. 

Black denied that his site had any influence over the readers who became violent, similar to the way more modern white supremacists have sought to distance themselves from a neo-Nazi who allegedly murdered a woman named Heather Heyer during a white supremacist riot in Charlottesville, Virginia, earlier this month. 


Stormfront’s disappearance is a blow to white supremacists, and not the first to come to their internet havens over the past few weeks. 

Following the Charlottesville riot, The Daily Stormer — something of a Stormfront evolution — has bounced around the internet in search of a home as domain hosts have booted it back and forth across the web. 

The removal of such sites from the internet has been cheered with caution. While removing the megaphone from voices of hate is ostensibly a good thing, many, including the EFF, worry that doing so will lead to a world in which domain hosts can remove any website they dislike at any time, for reasons they need not disclose.




Source link

The Electronic Frontier Foundation issues a warning to companies banning hate groups


The Electronic Frontier Foundation has a message for the tech companies that have been banning hate groups — be very careful.

The EFF, a nonprofit that focuses on “defending civil liberties in the digital world,” published a post on Thursday night that warned against the precedents set by the ongoing crackdown by major tech companies on websites like The Daily Stormer, a message board popular among the far right, including white supremacists and neo-Nazis.

“All fair-minded people must stand against the hateful violence and aggression that seems to be growing across our country. But we must also recognize that on the Internet, any tactic used now to silence neo-Nazis will soon be used against others, including people whose opinions we agree with,” wrote EFF staffers Jeremy Malcolm, Cindy Cohn, and Danny O’Brien.

Major tech companies have been cracking down on hate groups like never before in the wake of violence during protests in Charlottesville, Virginia. One woman died after a man with ties to far-right organizations allegedly drove his car into a group of counter-protestors.

Since then, Google, Facebook, Spotify, Squarespace, and other companies have taken action, garnering a mostly positive public response. 

The EFF’s post doesn’t come as a surprise; the organization is known to advocate against censorship on the internet.

The EFF noted that companies can choose what kind of speech to allow, but warned that companies are entering dangerous territory because of how much power they wield.

“We strongly believe that what GoDaddy, Google, and Cloudflare did here was dangerous. That’s because, even when the facts are the most vile, we must remain vigilant when platforms exercise these rights. Because Internet intermediaries, especially those with few competitors, control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world,” the post stated.

The EFF also warned that such action could be taken against anyone. 

“We would be making a mistake if we assumed that these sorts of censorship decisions would never turn against causes we love,” the post stated.

The EFF post had some supporters, most notably Cloudflare CEO Matthew Prince. Cloudflare ceased doing business with The Daily Stormer in a move Prince characterized as his unilateral decision. The company has traditionally maintained a hard line against censoring anything on the internet. While standing by the decision, Prince noted that the EFF post was “exactly on point.”

Others also applauded the EFF’s stance.

Others weren’t as convinced, arguing that the EFF downplayed just how toxic these groups have become.




Source link

Where hate speech is thriving online


Gab launched almost exactly a year ago with a definite ethos: free speech at all costs. 

That’s proving to be an attractive proposition for far-right elements that are now flocking to the platform after being driven out of the internet’s biggest shared spaces. 

The Twitter-like service has received an influx of users—and money—in the past few weeks as internet companies have cracked down on hate speech following the violence in Charlottesville, Virginia. It’s just one of the places on the internet that are becoming a haven for far-right individuals and groups, as well as a growing population that sees Silicon Valley companies as abusing their positions of power.

Andrew Torba, a former ad tech founder who launched Gab, illustrated the hard line he feels has been abandoned.

“This is just another great example of the ideological echo chamber in Silicon Valley. You see all these companies one after another coming out with the same exact message, same exact stance. There’s nobody saying ‘no, we stand for free speech. We hate some of the vile things that are said.’ Either you support free speech or you do not. Period.” 

Torba is far from alone. Gab’s fundraising efforts netted $500,000 in the past week (bringing its total to more than $1 million). In the last 30 days, Gab has had more than 25,000 sign-ups. The site currently has 225,000 users creating approximately 1.2 million posts (or “gabs”) per month.

Gab’s success comes after tech giants including Google and Facebook, which in the past had been shy about policing hate speech, began to forbid white supremacists from using their services in the wake of the violent events in Charlottesville, Virginia last weekend.

Even Cloudflare, a web infrastructure company that had long taken a public stance in favor of free speech even when it meant its services were open to extreme views, stopped working with the Daily Stormer, a white supremacist website.

After that, Daily Stormer founder Andrew Anglin said—on Gab—that he was “Gab 100%.”

Tech’s growing comfort with cracking down on hate speech, Torba said, is a “slippery slope.” He spoke passionately against censorship when asked about his reaction to Spotify and other music-streaming services pulling music by white supremacist bands. 

He also pointed to the firing of James Damore, the Google engineer whose manifesto went viral, as having driven people to Gab.


“Are we going to start burning books next? Because this is really the internet, the 21st century equivalent, of banning and burning books,” Torba said.

Gab isn’t the only website that has received an influx of users who feel alienated by the mainstream internet. There’s WeSearchr, which has become a destination for crowdfunding far-right causes. Websites 4chan and 8chan grew in popularity after Reddit began to crack down on hate groups. 

A digital platform expert, who requested anonymity due to fear of being doxxed, noted that 4chan and 8chan now host some of the most vile speech on the internet.

“Since it’s anonymous, they’ve always said hateful stuff but they’ve taken up under the banner of Trump to make peoples’ lives a living hell on Reddit,” the expert said. “There’s a specific board called /pol/ that harbors a lot of people who just spew most of the racist, hate speech that you see today.”


There’s also a business consideration. Many tech companies are feeling the public pressure of ensuring that users feel safe. To this end, most have adopted terms of service that provide some rules. Apple rejected the Gab app from its App Store several times earlier this year.

“Your app includes content that could be considered defamatory or mean-spirited,” Apple’s App Store Review team said in an email posted on Medium by Gab. “We found references to religion, race, gender, sexual orientation, or other targeted groups that could be offensive to many users.”

Apple’s CEO Tim Cook has also begun to speak publicly about the responsibility that companies have to balance free speech and company values.

“This is a responsibility that Apple takes very seriously. First we defend, we work to defend these freedoms by enabling people around the world to speak up. And second, we do it by speaking up ourselves. Because companies can, and should have values,” Cook said when accepting the Free Expression Award at the Newseum in Washington, D.C., in April.

Despite its commitment to free speech at all costs, Gab still has rules. Torba said doxxing (sharing or posting confidential information about users) has always been forbidden. The site does not allow illegal activity. When asked for examples, Torba noted death threats to the president of the United States. 

“We have members of the alt-left making death threats to the president of the United States. It’s not only against our guidelines but against the law,” Torba said. 

With $1 million in crowdfunding, Torba said he will hire more engineers to build out Gab’s live-streaming service (GabTV) and an Android app. 

The service is ad-free, relying on crowdfunding and subscriptions for revenue. GabPro offers verification, saved posts, lists, private group chats, other features, and, soon, live video. 

There’s more yet to come. On Wednesday, Apple Pay and PayPal pulled their services from sites selling white supremacist and Nazi products, BuzzFeed reported. 

“First they banned them, censoring them, whatever,” Torba said. “Now they, Silicon Valley, is transitioning by attacking creators by going after their sources of income.”

So, of course, Gab is launching a virtual currency.




Source link

After Charlottesville, tech companies are forced to take action against hate speech


Silicon Valley spent years preaching a hands-off approach to even the most extreme speech in the interest of connecting the entire world. 

After Charlottesville, that’s changing quickly. 

Facebook, Google, Spotify, Squarespace, and a variety of other tech companies are taking action to curb the use of their platforms and services by groups associated with the far right. The effort, though apparently uncoordinated, is among the most aggressive campaigns yet to push a particular movement off the internet’s mainstream spaces.

The moves come in the immediate aftermath of a weekend of violence in Charlottesville, Virginia, where the “Unite The Right” rally, organized in part on Facebook, resulted in three deaths and dozens of injuries. The Facebook Event page for the rally, which drew a mix of white nationalists, self-identified Nazis, and the alt-right, was live for more than a month before Facebook removed it, Business Insider reported. It was shut down only one day prior to the rally. 

Now, in the aftermath of Charlottesville, a violent act affiliated with the alt-right is serving as a turning point, and more tech companies are taking public and private actions—some unprecedented or at least unusual—against what they and their communities are flagging. Facebook and Reddit are actively shutting down several hate groups, CNET reported. Even Cloudflare, a cloud security company with a history of taking a hardline stance against limiting the use of its services, has reportedly changed its position.

“Previously tech companies felt like their job was to work behind the scenes and to focus on the business of business. What I think has changed since Charlottesville is the fact that companies are free to create online communities that reflect the type of communities they want to see in real life,” said Brittan Heller, ‎director of technology and society at Anti-Defamation League. 

“Companies are now realizing that now is a time of moral leadership,” said Heller, whose organization published its first report on cyberhate in 1985.

One focus of tech companies’ efforts has been to quell The Daily Stormer, a website launched in 2013 by neo-Nazi Andrew Anglin. The site’s affiliation with the alt-right and hate speech is nothing new. For example, Anglin is being sued for launching a “harassment campaign that has relentlessly terrorized a Jewish woman and her family with anti-Semitic threats and messages,” The Daily Beast reported in July.

The Daily Stormer recently gained mainstream attention for its bigotry in Charlottesville, having helped coordinate the rally. Afterwards, content on the website praised the man who murdered anti-racist protester Heather Heyer, and other posts personally attacked her. After those events, the website became a major problem in the eyes of the tech industry.

“Since tech companies are private entities by law they have the right to take action on their terms of service. They can make choices based on the demands of their customers and the needs of the market. This was the point that it wasn’t just freedom of expression, it was incitement to violence,” Heller said. 

On Sunday, GoDaddy told The Daily Stormer that it would stop hosting its hate-filled message boards. The site then moved to Google, which banned it from its hosting service as well. Google also removed the site’s YouTube page, and email provider Zoho dropped it as a client. 

Facebook also has been more active, deleting links to The Daily Stormer article that personally attacked Heather Heyer, an anti-racist protester who was murdered after a driver struck her with his car. The article can only be shared if it includes a caption condemning the post. 

“Any shares of The Daily Stormer article that don’t include a caption will be deleted,” Facebook said, according to The Verge.

And yet, GoDaddy, the first tech company to make a major public move against The Daily Stormer, had previously defended the website. 

When asked by The Daily Beast in July why GoDaddy hosted The Daily Stormer and other alt-right websites, the company cited the First Amendment and noted that with “more than 17 million customers” it cannot monitor every site. 

Twitter has made similar arguments in the past, citing free speech as its reason for allowing President Donald Trump to use the platform, and citing its scale in its continuous game of whack-a-mole when it comes to addressing abuse reports. 

When it comes to The Daily Stormer and other alt-right figures on Twitter, the company does not comment on individual accounts for privacy and security reasons, a Twitter spokesperson told Mashable.

But Twitter’s community standards do include a “Hateful conduct policy,” which specifically states, “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.” 

A Twitter account claiming to be The Daily Stormer was suspended this week.

“Something that violates community terms can float under the radar for years, whether it’s a closed community or there’s just not outside or public scrutiny on what the group is doing,” said Emma Llanso, director of the Free Expression Project for the Center for Democracy & Technology. 

“It’s not going to be a perfectly applied set of terms because that’s essentially impossible for websites and online services to do given the sheer volume of content,” Llanso continued.  

Not all action took place in the aftermath. Airbnb had been made aware of the potential threat earlier in the month. The home-sharing platform learned through its community that attendees of the rally in Charlottesville, Virginia, had registered to use its services, and proceeded to ban them. 

“When we see people pursuing behavior on the platform that would be antithetical to the Airbnb Community Commitment, we take appropriate action. In this case, last week, we removed these people from Airbnb,” CEO Brian Chesky said in a statement.

The pushback didn’t happen as quickly for some tech companies. Cloudflare still worked with The Daily Stormer until Wednesday. 

According to a statement from Cloudflare, the company does not host content itself and therefore, it suggested, has less of a stake in such disputes. 

“Cloudflare terminating any user would not remove their content from the Internet, it would simply make a site slower and more vulnerable to attack,” a statement from the company reads, according to Quartz

But that type of laissez-faire attitude sparked outrage.

On Wednesday, Matt Sheffield of Salon tweeted an image of Anglin sharing an email of his Cloudflare subscription being terminated.

Cloudflare did not immediately respond to a request for comment. 

Also on Wednesday, Spotify pulled “hate music” from its streaming platform, after Digital Music News published a list of 27 bands it described as “white power music.” 

For several networks, including Facebook, Google, and Twitter, their terms of service against hate speech and violence firmed up in 2015, when the public and several lawmakers pushed them on addressing terrorism. They also faced scrutiny in Europe, where hosting hate speech violates laws. 

Now, the companies face a reality in which it’s “important to be able to take proactive steps under their own terms instead of having to be responsive,” said Llanso of the Center for Democracy & Technology. 

The “Unite The Right” rally has only made it increasingly difficult for platforms to stay silent, as the actions are broadcast on social media and on television, not just behind closed doors or in pockets of the United States. 




Source link

Chinese drivers hate that new car smell, so Ford is trying its best to get rid of it


That “new car smell” can be pretty divisive. Some of us love it, yet the rest absolutely can’t stand it.

In China, it seems the majority hate it. And the fast-growing market is so valuable that Ford is going out of its way to get rid of the new car smell, so people don’t get turned off.

According to Reuters, Ford has appointed 18 “golden nose” smell experts in its Chinese research plant, to weed out anything that contributes to a perceptible odour in a car.

A lot of a new car’s smell comes from new fabrics and materials on steering wheels and seats. So instead of wrapping seats in plastic, as they do in many overseas markets, the car seats in China are stored in cloth bags to ventilate them before they get put in a new Ford.

China’s distaste for chemical smells comes from a society that has grown paranoid about its perennial air pollution, a local parts manufacturer was quoted by Reuters as saying.

Don Yu, China general manager at CGT Automotive, which supplies car interior material to many large makers, said: “In China, people open the car and sit inside. If the smell isn’t good enough, they think it will jeopardise their health.”

But while the Chinese hate the new car smell, plenty of other people love it.

Ford Spain figured out how to bottle that new odour for used cars, to persuade people to buy them.

And products like the Ozium air sanitizer and the Chemical Guys’ New Car Smell promise to do what they say on the tin. 

Who knows, there may be an opportunity in the Chinese market for these products to go in and provide a new car smell for the minority that still craves it.




Source link

This slide reveals Facebook’s cringeworthy hate speech policies


Image: CAMUS/AP/REX/SHUTTERSTOCK

How does Facebook decide what is and is not hate speech? 

According to a new report from ProPublica, with slides that feature the Backstreet Boys.

The image below comes from the organization’s story that features a deep dive into Facebook’s moderation efforts, complete with never-before-seen documents that detail how moderators are trained. 

One of the most cringeworthy parts shows a slide asking, “Which of the below subsets do we protect?” The options are female drivers, black children, and white men. The section for white men shows a picture of the Backstreet Boys. 

The answer? White men.

The policies, according to ProPublica, allowed moderators to delete hate speech against white men because they fell under a so-called “protected category,” while the other two examples in the above slide were in “subset categories,” and therefore attacks on them were allowed. 

The revelation will do little to change the narrative around how Facebook handles hate speech—or more particularly how it has not effectively protected people on the platform who might be part of a “subset.”

Facebook’s moderation policies are a complicated system, where these “protected categories” are based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease, according to ProPublica. Meanwhile, black children wouldn’t count as a protected category because Facebook does not protect age, and female drivers wouldn’t count because Facebook does not protect occupation. 

According to Facebook, its policies aren’t perfect.

“The policies do not always lead to perfect outcomes,” Monika Bickert, head of global policy management at Facebook, told ProPublica. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”

That’s a similar excuse to the one Facebook put forth in a blog post Tuesday as part of its “Hard Questions” series. 

The good news is that Facebook is trying to better itself. That includes being more transparent about its practices, which comes only after reports like ProPublica‘s and The Guardian‘s recent “Facebook Files” series. 

At least Facebook has come far. ProPublica revealed that back in 2008, when the social network was four years old, Facebook’s censorship rulebook was only a single page, with one glaring catch-all rule:

“At the bottom of the page it said, ‘Take down anything else that makes you feel uncomfortable,’” Dave Willner, who joined Facebook’s content team in 2008, told ProPublica.

Willner then worked to create a 15,000-word rulebook, parts of which are still used at the company today. And yet there remain many problematic areas in how the network polices itself. The Guardian’s Facebook Files revealed numerous issues, like how Facebook allows bullying on the site, among other gray areas. 

Facebook did not immediately respond to a request for comment.




Source link

TL;DR on Facebook’s hate speech strategy: ‘It’s hard, guys’


As Facebook works to become more transparent about its strategy to combat hate speech, one of its public policy leaders published a lengthy explainer on the process Tuesday. For all of the post’s enlightening details, though, it felt as if Facebook was saying, “Sorry, guys, it’s hard.”

Richard Allan, Facebook’s VP of Public Policy in Europe, the Middle East, and Asia, took to Facebook’s blog to lay it all out as part of the platform’s “Hard Questions” series.

There’s a lot of satisfying context and background here, especially in the wake of the platform’s recent agreement to fight online extremism alongside Twitter and YouTube as well as the Online Civil Courage Initiative it unveiled earlier this year. 

And while Facebook is still untangling a complex issue, it’s a welcome example of transparency for a platform that hasn’t had the best track record in how it handles offensive content.

Defining — and contextualizing — hate speech

There’s a lot of discussion about the boundaries of hate speech and how the platform decides on what, exactly, constitutes hate speech. Allan even compares the issue in Germany, which we’ve touched on before, to that in America.

In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.

It makes sense, in a way, to name-check Germany here; after all, the raids Allan refers to have actually happened. 

And Germany is one of the countries that have gone all-in on forcing Facebook to deal with the hate speech issue (and also the reason that a VP from Facebook EMEA is addressing this as opposed to an American exec). 

What’s especially interesting is that Allan offers up actual numbers as an example of how Facebook removes hate speech. 

Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.*)

See that asterisk at the very end, though? It refers to all the caveats that come with reporting those numbers. Here’s a complete screenshot of all of those. 

And that’s really the issue here. 

Facebook is working to fight hate speech (which is good!) and, in this report, there are actually statistics (including the move to add “3,000 people to our community operations team around the world”). Plus, there’s a reference to AI experiments which sounds an awful lot like what Alphabet has been doing to screen toxic comments.

But, guys, it’s, like, super hard and stuff. And Facebook really, really wants us to know that to the point of over-explaining the problem. 

There are so many qualifiers to one set of numbers that it’s easy to lose track of whether any ground was gained. For all the transparency, the actual progress remains clouded, both for us as users and, apparently, for the platform itself. 

Time to get going

Allan is exhaustive in the examples he lists and the explanation of how “context” and “intent” matter. And with good reason: as the platform has grown, the monitoring of content has become an even bigger issue as evidenced by The Guardian‘s recent “Facebook Files” series, which highlighted numerous issues, from suicides streamed on Facebook Live to the ins-and-outs of Facebook’s moderators.

Look, this stuff is hard and good on Facebook for continuing to work on it (we’re looking at you, Twitter) and sharing this long (looooooong) post about it. 

Free speech has always been a real sticky wicket, particularly here in the United States, and Facebook is still a relatively new platform (you know, compared to, say, the printing press). And nothing about this is going to be smooth and graceful and whatever solution they come up with is going to be a bit more complicated than flooding ISIS’ Facebook page with niceties to combat the hate. 

But we’re also smart enough to understand words have different meanings to different cultures. We also understand it’s a messy thing and that it’s going to take time and (a lot of) work and that, in the end, someone’s going to be mad.

There are now 2 billion Facebook users globally, so these issues are only going to keep circulating until the platform implements its solutions, and even then there will still be work to do.

So hurrah to Facebook for being more open about the process, but it’s time to move forward with more concrete solutions. 




Source link

Police raid homes of 36 accused of spewing hate online in Germany


Police secure a stadium entrance prior to the German DFB Cup final soccer match.

Image: clemens BILAN/EPA/REX/Shutterstock

German police searched the homes of three dozen people on Tuesday, all of whom are accused of writing hateful messages on social media.

The raids illustrate significant differences between how Germany treats hate speech online when compared with the United States. 

The 36 targeted people are accused of threatening others, racism, and other forms of harassment, and most of them are classified as “right wing,” according to The New York Times.

In the U.S., speech meant to threaten is illegal, but hate speech is not. In Germany, both can result in a visit from police. German law prohibits speech that “incites hatred” against a number of groups, including groups defined by race, national origin, and religion.

This gives Germany a broader scope to go after individuals for their social media actions, and it’s indicative of a larger trend in which the German government (as well as several other European Union governments) has shown its exasperation with social media’s ability to perpetuate hate and misinformation. 

The German government is considering a law that would force social media sites such as Twitter and Facebook to build a system that allows for the takedown of hateful posts within 24 hours. Posts that are debatably hateful would be given a week before being deemed permissible or not. Noncompliance could result in €50 million fines for social media sites.  

If it passes, the law would be yet another European move to make social media sites pay significant sums of money for not complying with European standards of online behavior and practice. 

In 2018, for example, European Union governments will be allowed to fine Facebook, Google, and other online giants up to 4 percent of their yearly revenue (read: billions of dollars) for violating privacy rights of people in the EU. Such violations include using personal data to target specific ads to EU residents, something that happens all the time in the U.S. 

This latest legal battle between an EU government and an online titan shows the fight isn’t just about privacy, but also about how much responsibility social media sites have for what people say on them.




Source link