Category Archives: social media

Social Scientists Create “Deepfake” Software Allowing Anybody To Edit Anything Anyone Says On Video

From the university that employs Christine Blasey Ford, scientists at Stanford are doing their part to create what will be an inevitable dystopian nightmare.

Researchers at Stanford, the Max Planck Institute for Informatics, Princeton University and Adobe Research have developed software that lets users edit what people are saying in videos, allowing anyone to make anybody appear to say anything, according to Observer.

The software uses machine learning and a 3-D model of the target’s face to generate new footage, allowing the user to change, add or remove words coming out of a person’s mouth on video simply by typing new text. Not only that, the edited result plays with a seamless audio/visual flow, without visible cuts.
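
The pipeline is easier to grasp in code than in prose: the system transcribes the source video, aligns it to phonemes, and, when you type an edit, searches the footage for moments where the speaker already produced the required sounds; a parametric 3-D face model and a neural renderer then blend the retrieved mouth shapes so no seam shows. Below is a toy Python sketch of just the retrieval step. Everything in it (the index format, the greedy matcher) is our own illustrative assumption, not the researchers' actual code:

    # Toy sketch of the snippet-retrieval idea behind text-based video
    # editing. Hypothetical data structures for illustration only; the
    # real system also re-renders the face to hide the splices.
    from typing import Dict, List, Tuple

    # Hypothetical index: phoneme n-gram -> (start_frame, end_frame) where
    # the speaker utters exactly that sound sequence in the source footage.
    SnippetIndex = Dict[Tuple[str, ...], Tuple[int, int]]

    def plan_edit(index: SnippetIndex, new_phonemes: List[str]) -> List[Tuple[int, int]]:
        """Greedily cover the edited phoneme sequence with the longest
        snippets the speaker already uttered in the source video."""
        plan: List[Tuple[int, int]] = []
        i = 0
        while i < len(new_phonemes):
            for j in range(len(new_phonemes), i, -1):  # longest match first
                key = tuple(new_phonemes[i:j])
                if key in index:
                    plan.append(index[key])
                    i = j
                    break
            else:
                raise ValueError(f"no source snippet covers {new_phonemes[i]!r}")
        return plan

    # The speaker never said this word, but said each sound somewhere:
    index = {("N",): (10, 14), ("AW",): (52, 57), ("HH", "EH", "L"): (100, 112)}
    print(plan_edit(index, ["HH", "EH", "L", "N", "AW"]))
    # -> [(100, 112), (10, 14), (52, 57)]

The hard part, and the reason the results look seamless, is the rendering stage this sketch waves away: the retrieved mouth regions are fitted to a 3-D face model and re-synthesized so lighting and pose match the surrounding frames.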

Here’s a video of the frightening software at work.

We’re sure there will be absolutely no blowback at all to this. After all, just last week there was public outrage when somebody jokingly edited a video of Nancy Pelosi to make her seem drunk. What would happen if somebody edited a video of her speaking to make her swear wildly, or say racist things?

This deepfake software is already being described as “the equivalent of Christmas coming early for a Russian troll farm”, now that the 2020 election is underway. We’re sure it’ll eventually also be a topic du jour on MSNBC and CNN if Trump wins again in 2020. 

And we have to ask: how long before the software is incorporated into Adobe’s retail video editing software? After all, the research team behind it already felt compelled to attach a lengthy disclaimer that states:

We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.

And…

We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse.

Are they covering themselves legally for this “technology” to go mainstream?

Meanwhile, joke deepfakes continue to pop up, like this fake video of Mark Zuckerberg sitting at a desk giving a nefarious-sounding speech about Facebook’s power.

 

Joe Rogan was also the victim of a deepfake recently: the AI company Dessa released audio making it sound like he was discussing chimpanzee hockey.

Don’t worry though, we’re sure this won’t fall into the wrong hands.

Source: ZeroHedge


All Four Major Wireless Carriers Hit With Lawsuits Over Sharing, Selling Location Data

(TechDirt) We’ve noted repeatedly that if you’re upset about Facebook’s privacy scandals, you should be equally concerned about the wireless industry’s ongoing location data scandals. Not only were the major carriers caught selling your location data to any nitwit with a checkbook, they were even found to be selling your E-911 location data, which provides even more granular detail about your location than GPS does. This data was then found to have been widely abused by everybody from law enforcement to randos pretending to be law enforcement.

Throughout all this, the Ajit Pai FCC has done absolutely nothing to seriously police the problem. That means that while carriers have promised to stop collecting and selling this data, nobody has bothered to force them to confirm they actually have. Given telecom’s history when it comes to consumer privacy, somebody might just want to double-check their math (and ask what happened to all the data already collected and sold over the last decade).

Compounding carrier problems, all four major wireless carriers were hit last week with class-action lawsuits (correctly) alleging that the carriers violated Section 222 of the Federal Communications Act by selling consumer proprietary network information (CPNI):

“Through its negligent and deliberate acts, including inexplicable failures to follow its own Privacy Policy, T-Mobile permitted access to Plaintiffs and Class Members’ CPI and CPNI,” the complaint against T-Mobile reads, referring to “confidential proprietary information” and “customer proprietary network information,” the latter of which includes location data.

It’s likely that the sale of 911 data is where carriers are in the hottest water, since that’s their most obvious infraction of the law. It’s of course worth pointing out that wireless carriers (and fixed-line ISPs, frankly) have been hoovering up and selling location, clickstream, and a vast ocean of other user data for decades, with very few (any?) Congressional lawmakers much caring about it. It’s another example of how Facebook’s cavalier treatment of user data (and government apathy toward meaningful solutions) isn’t just some errant exception — it’s the norm.

Back in 2016, the previous FCC uncharacteristically tried to impose some pretty basic rules that would have gone a long way toward preventing these location data scandals by requiring that carriers be more transparent about what data is collected and who it’s sold to. The rules also required that consumers opt in before more sensitive (read: financial, location) data could be shared. But telecom lobbyists quickly convinced Congress to obliterate those rules in 2017 using the Congressional Review Act, before they could even take effect.

Two years later, the sector is swimming in scandal, and everybody wears a dumb look on their face, utterly perplexed as to how we got to this point.

Source: TechDirt

Instagram ‘Star’ Sobs In Viral Video At Idea Of Having To Get A Real Job After Her Account Was Deleted: ‘I Am Not Work Material!’

A video of a 21-year-old social media influencer crying hysterically about not wanting to enter the workforce is going viral on the internet.

https://assets.rbl.ms/19379405/1245x700.jpg

In the video, Jessy Taylor of Tampa, Florida, yells at, cries to, and warns her followers against reporting her account. Taylor says her previous account — which had over 100,000 followers — was deleted after people allegedly reported the account for being spam.

What’s going on in the video?

In the video, which she shared on YouTube, Taylor — who now apparently lives in Los Angeles just to be an Instagram star — can be seen sobbing openly at the prospect of having to perform like a normal adult in the real workforce. She adds that her biggest fear is that she’ll end up a drug-addicted prostitute living on the streets if she can’t be an internet star.

“I’m nothing without my following,” she cries. “I am nothing without my following.”

Taylor, who apparently went to college and previously worked at McDonald’s, says she’s in debt and has no skills.

“I want to say to everybody that’s been reporting me, think twice, because you’re ruining my life, because I make all of my money online — all of it! — and I don’t want to lose that,” she wept.

“The people who work 9-5 — that is not me, I am in L.A. to not be like that,” she insisted.

Taylor, who admitted to being a former sex worker, said that she simply couldn’t go back to that life.

“I was a f***ing prostitute,” she admitted. “I used to strip every single day. I don’t even do that s**t anymore because I make all my money online. I don’t want to go back to that life.”

She continued, lamenting her time as a former McDonald’s worker.

“I used to work at McDonald’s before I did YouTube, Instagram, before I had 100,000 followers!” she cried. “Before I had everything in my life I was a f***ing loser!”

She concluded her video with a plea. Or a demand. Or something.

“I’m not work material! I will never be work material! So stop f***ing reporting me on motherf***ing Instagram!” she warned. “The last thing I want to be is a homeless prostitute in the f***ing street doing meth. That is the last thing I f***ing want to do so stop f***ing trying to ruin my life!”

According to the Daily Mail, Taylor’s other social accounts — Twitter, Facebook — have between 3,000 and 4,000 followers apiece.

Source: by Sarah Taylor | The Blaze

Why It’s Impossible to Hide Your Location from Facebook

A computer science professor noticed that Facebook was still showing her location-based ads even though she had revoked the app’s access to her phone’s GPS. Investigating, she realized that Facebook has other ways to infer where you are.

“The Location Controls provided by Facebook give an illusion of control over the data that informs one’s ad experience, not actual control. Moreover, Facebook makes false claims about the effect of controls.” This statement, which has the merit of being clear, was made by Aleksandra Korolova, an assistant professor of computer science at the University of Southern California.

In an article published on Medium, she describes her surprise at continuing to see Facebook ads based on her geographic position even though she has disabled Facebook’s geolocation. She goes even further: her profile does not list the city where she lives, she has not posted a photo on Facebook in years, and she never publishes content containing her position.

https://www.zerohedge.com/sites/default/files/inline-images/zuckerberg%20trust%20me_2.jpg?itok=OCXLhvAx

Nothing in her behavior betrays her position

She has even disabled geolocation in Facebook’s companion apps such as WhatsApp, Instagram and Messenger. Finally, she never searches by geographic criteria in the Facebook application.

So how is it possible that Aleksandra Korolova sees geolocated ads for Santa Monica, the California city where she lives, and for Los Angeles, where she works?

Why did she see ads for activities in Montana while she was crossing Glacier National Park in that state? And why did she see the same kind of geolocated ads when she went to Massachusetts or Israel?

Facebook has many other ways besides GPS

One could almost believe that this computer scientist is paranoid, except that her doubts are well founded. Going to the “About Facebook Ads” section, she first notes that the social network specifies that users’ location is also determined by “where you connect to the Internet.”

But things go even further when you consult the documents Facebook makes available to advertisers. A diagram details the processes Facebook uses to measure store visits. In addition to classic GPS geolocation (which Korolova has disabled), Facebook collects the Wi-Fi networks around the user, as well as Bluetooth signals, which can be picked up by beacons placed in shops.
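
Facebook's exact pipeline isn't public, but Wi-Fi positioning in general is simple to sketch: match the access points a device can currently see against a database of previously surveyed access-point locations. A minimal Python illustration follows; the BSSIDs, coordinates and AP_LOCATIONS table are all hypothetical:

    # Minimal sketch of Wi-Fi-based positioning: estimate the device's
    # position as the centroid of the known locations of whatever access
    # points it can currently see. No GPS involved.
    from typing import Dict, List, Optional, Tuple

    # Hypothetical survey data: access-point BSSID -> (latitude, longitude)
    AP_LOCATIONS: Dict[str, Tuple[float, float]] = {
        "aa:bb:cc:00:00:01": (34.0195, -118.4912),  # Santa Monica
        "aa:bb:cc:00:00:02": (34.0211, -118.4890),
        "aa:bb:cc:00:00:03": (34.0183, -118.4927),
    }

    def estimate_position(visible: List[str]) -> Optional[Tuple[float, float]]:
        hits = [AP_LOCATIONS[b] for b in visible if b in AP_LOCATIONS]
        if not hits:
            return None
        lat = sum(p[0] for p in hits) / len(hits)
        lon = sum(p[1] for p in hits) / len(hits)
        return (lat, lon)

    # A phone merely scanning for networks still gives itself away:
    print(estimate_position(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03"]))
    # -> (34.0189, -118.49195), i.e. Santa Monica

This is why simply scanning for nearby networks, with GPS fully disabled, can place a phone within a block or two in a dense city.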


IP addresses provide valuable clues

With this data, including connection IP addresses, Facebook needs nothing more to extrapolate a reasonably accurate position, even when the device’s geolocation is disabled.

“We use the IP and other information like check-ins or the city mentioned in your profile,” a Facebook spokesperson confirmed to Gizmodo, adding that there is no way to completely disable location for ads.
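
The IP side is no more mysterious: geolocation databases map address prefixes to places, and lookup is longest-prefix matching. Here is a minimal sketch with a purely hypothetical prefix table (real services license commercial GeoIP data; the addresses below are reserved documentation ranges):

    # Minimal sketch of IP-based geolocation via longest-prefix matching.
    import ipaddress
    from typing import Optional

    # Hypothetical prefix -> place table
    GEO_TABLE = {
        ipaddress.ip_network("203.0.113.0/24"): "Santa Monica, CA",
        ipaddress.ip_network("198.51.100.0/24"): "Los Angeles, CA",
        ipaddress.ip_network("198.51.0.0/16"): "California (region)",
    }

    def locate(ip: str) -> Optional[str]:
        """Return the place for the most specific (longest) matching prefix."""
        addr = ipaddress.ip_address(ip)
        matches = [net for net in GEO_TABLE if addr in net]
        if not matches:
            return None
        return GEO_TABLE[max(matches, key=lambda net: net.prefixlen)]

    print(locate("198.51.100.7"))  # -> "Los Angeles, CA" (the /24 beats the /16)
    print(locate("203.0.113.42"))  # -> "Santa Monica, CA"

The result is coarser than GPS, but more than precise enough to serve Santa Monica ads to someone whose connection comes out of a Santa Monica ISP.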

Now you know: if we thought we were hiding our location from Facebook, we were wrong. The social network always has a way to know your position, more or less precisely. And that’s without forgetting what the New York Times revealed two weeks ago: Facebook shares users’ data with its partners, to the point where some of them could read private messages…


Source: vijbzabyss | Steemit

Apple Is Quietly Giving People ‘Trust Scores’ Based On Their iPhone Data

iPhone, iPad, Apple Watch and Apple TV data is used to see how trustworthy you are, similar to a scenario in the dystopian series Black Mirror

https://aim4truthblog.files.wordpress.com/2018/12/get-started-device-trust-score.jpg

Apple has quietly introduced “trust scores” for people based on how they use their iPhones and other devices.

The tech giant, which last month became the first public company to be worth more than $1 trillion, said in an update to its privacy policy that the scores would be determined by tracking the calls and emails made on Apple devices. 

In the update, Apple said the rating system could be used to help fight fraud, though it gave no specific examples of how this would work.

The provision, first spotted by VentureBeat, appears in an update to the iTunes Store and Privacy page and comes ahead of the release of the iPhone XS and iPhone XS Max on Friday, 21 September.

“To help identify and prevent fraud, information about how you use your device, including the approximate number of phone calls or emails you send and receive, will be used to compute a device trust score when you attempt a purchase,” the page reads. “The submissions are designed so Apple cannot learn the real values on your device. The scores are stored for a fixed time on our servers.”
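
Apple doesn't say how the submissions are obscured, but "designed so Apple cannot learn the real values" is the vocabulary of local differential privacy, which Apple has publicly used elsewhere (for keyboard and emoji analytics). Purely as an illustrative assumption, here is a minimal Python sketch of one classic mechanism of that kind, randomized response over bucketed counts; nothing below is Apple's actual algorithm:

    # Randomized response: each device sometimes lies about its bucket, so
    # no single report is trustworthy, yet aggregates remain accurate.
    import random
    from typing import List

    BUCKETS = ["0-9", "10-49", "50-199", "200+"]  # e.g. weekly call counts

    def randomized_response(true_bucket: str, p_truth: float = 0.75) -> str:
        """Report the real bucket with probability p_truth, otherwise a
        uniformly random bucket (possibly the real one by chance)."""
        if random.random() < p_truth:
            return true_bucket
        return random.choice(BUCKETS)

    def debias(reports: List[str], p_truth: float = 0.75) -> dict:
        """Recover unbiased bucket frequencies: since
        observed = p_truth * true + (1 - p_truth) / k, solve for true."""
        k, n = len(BUCKETS), len(reports)
        return {b: (reports.count(b) / n - (1 - p_truth) / k) / p_truth
                for b in BUCKETS}

    # 10,000 simulated devices, all truly in the "10-49" bucket:
    reports = [randomized_response("10-49") for _ in range(10_000)]
    print(debias(reports))  # "10-49" estimate near 1.0, the rest near 0.0

Any single device's report is deniable, yet the server can still recover accurate population-level frequencies, which appears to be exactly the property the privacy page is claiming.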

The method is reminiscent of an episode of the dystopian TV series Black Mirror, in which people are rated on their interactions with other people.

In the episode, Nosedive, the ratings are used to determine a person’s socioeconomic status, affecting their access to healthcare, transport and housing.

The comparison to the episode was noted by people on social media, with some calling it “dangerous”.

https://static.independent.co.uk/s3fs-public/thumbnails/image/2018/09/18/12/iphone.jpg?width=1368&height=912&fit=bounds&format=pjpg&auto=webp&quality=70
The new Apple iPhone XS (L) and iPhone XS Max (R) are displayed during an Apple special event at the Steve Jobs Theater on September 12, 2018 in Cupertino, California (Justin Sullivan/Getty Images)

The disclosure of the device tracking fits in with Apple’s promise to provide transparency regarding its collection of user data.

The vagueness of the language used in the update, however, means it could be interpreted in a broad and potentially invasive way. It is also unusual that it applies to Apple TVs, which cannot send or receive emails or phone calls.

A spokesperson for Apple was not immediately available for comment.

Source: by Anthony Cuthbertson | Independent

 

Facebook is Dying, Here’s Proof

https://whiskeytangotexas.files.wordpress.com/2018/11/e46fe-1zuqxgccwui09vt0imrkoxg.jpeg
Facebook is not sorry, it’s getting desperate.

The #DeleteFacebook hashtag is alive and well. It’s become an attractive scenario for anyone brave enough to deal with mobile addiction. In fact, we’re deleting the app and deleting our accounts at such a pace that Facebook is fighting back! Yes, you heard me right.

Facebook is now making users wait twice as long to delete their accounts

So you finally realized your life would be better without Facebook, Instagram, Messenger and WhatsApp. Then you notice something odd: it’s harder than ever to quit, because Facebook now makes saying goodbye a long-drawn-out affair. Of course, Facebook doesn’t want you (its product) to leave.

In a bizarre blow to the product and customer experience, it now takes a full month to delete your Facebook account — twice as long as before. That’s not exactly free will, or the right to say no and break up with an app you may have spent way too much of your life on.

Facebook Thinks We Should Have a “Grace” Period

Facebook thinks it should override our decision by making us wait longer. Facebook really understands trust and consumers, it seems. When a user tries to delete their account, Facebook makes them wait out a “grace period” before the account is actually deleted.

Facebook must seriously be losing a lot of users in 2018 to pull this stunt. Where’s the strategy, bro? The change comes after months of scandals and PR crises for Facebook, including Cambridge Analytica and the recent hack affecting 50 million accounts. For people under 45, the case for deleting Facebook’s apps has never been clearer.

Facebook thought it could be the gateway to the world and help the world feel closer together; instead it made us the product, harvested our data and sold it to the highest bidder, not to mention scamming brands out of millions of dollars each year. I may no longer exist on Facebook, but their targeting data on me still does — think about what that means for a while.

Facebook even shared user data with Chinese companies. Facebook handed over “deep access” to user data to 60 other tech companies. Facebook’s laundry list of privacy invasions is downright criminal, and not only should Mark Zuckerberg not be its CEO, he should be held accountable.

Listen: if a data breach affecting 50 million users could cost Facebook $1.6 billion in EU fines, how much should it pay the average user for its crimes in the long run? Facebook is already breaking the law with breaches of the General Data Protection Regulation (GDPR) in the European Union.

A One-Month Wait to Delete Facebook is Worse than Censorship

So now we are prisoners online, it would seem, according to Facebook. Thinking of deleting your Facebook account? Not so fast. Sorry, my friends: it now takes an entire month, up from 14 days before. And even if you delete your account, don’t expect Facebook to put your data in the trash bin. That’s impossible.

Facebook’s trust dilemma is so epic that the stock could decline further even as pundits say it’s alright. Just wait until we find out how many users leave Facebook’s flagship app by 2020. It’s going to be pretty epic. The change to the deletion time was first noticed by The Verge.

Amazon gives me speed and convenience, and Facebook gives me fraud and imprisonment. Good luck competing in the future of Advertising, Facebook. This is the sort of thing that will make users revive the #deletefacebook campaign, and it certainly makes me upset.

  • When a user decides to delete their Facebook account, it doesn’t actually get deleted straight away. Instead, there’s a “grace period,” in which the account remains inactive but accessible — just in case the user gets cold feet and decides to stay on Facebook after all (see the sketch after this list).
  • Aw, yeah, I really can’t quit, just one more thumb scroll of my legacy feed where nobody I know is active anymore.
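
Mechanically, the grace period described above amounts to a tiny state machine: a deletion request starts a 30-day timer, and any login during the window silently flips the account back to active. A minimal sketch of that logic, with made-up names (Facebook's real implementation is of course not public):

    # Hypothetical sketch of a deletion "grace period" state machine.
    from datetime import datetime, timedelta

    GRACE_PERIOD = timedelta(days=30)  # reportedly doubled from 14 days

    class Account:
        def __init__(self) -> None:
            self.deletion_requested_at = None  # None means "active"

        def request_deletion(self, now: datetime) -> None:
            self.deletion_requested_at = now  # inactive, but not deleted

        def login(self) -> None:
            # Logging in during the grace period cancels the deletion.
            self.deletion_requested_at = None

        def should_purge(self, now: datetime) -> bool:
            return (self.deletion_requested_at is not None
                    and now - self.deletion_requested_at >= GRACE_PERIOD)

    acct = Account()
    t0 = datetime(2018, 11, 1)
    acct.request_deletion(t0)
    print(acct.should_purge(t0 + timedelta(days=29)))  # False: still waiting
    acct.login()                                       # one nostalgic scroll...
    print(acct.should_purge(t0 + timedelta(days=31)))  # False: deletion cancelled

Note how the design favors retention: doing nothing for a full month is the only path that actually deletes anything.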

Facebook thinks being nostalgic is cool; historically, that grace period has been 14 days, or two weeks. I bet Mark is nostalgic for the good old days, but major failures in leadership, strategy and pivoting have led Facebook down a dead-end path. When the vanity of having billions of users retreats into the past, Facebook doesn’t have a product, because the product has always been you and your data!

Silicon Valley doesn’t take the regulation of AI seriously (because it’s too expensive), and Facebook and YouTube are prime examples of this. When the talent exodus starts, as it has for Facebook and Snapchat, it’s pretty serious. There’s no saving a sinking ship that makes it harder to leave.

Don’t be afraid; we all have to move on. It’s best to terminate now if you want your data deleted. It’s time to do the unthinkable and lead a higher-quality life:

https://whiskeytangotexas.files.wordpress.com/2018/11/b52f5-1e52orrngpnv4nrjlekvrdg.jpeg

  • Delete Facebook
  • Delete Messenger
  • Delete Instagram
  • Delete WhatsApp

Facebook was once a candy treat of the digital dopamine variety for human connection. That era is long gone.

Of course, a longer grace period is also to Facebook’s advantage as it mindf8cks us into thinking it’s still relevant. Twitter is nostalgia, Facebook is just dumb.

If your value is tied to massive numbers of users and those users are leaving, you have nowhere to go but down. Instagram is no YouTube, and WhatsApp is no WeChat. If only they had had the sense to change their CEO, things could have been different. But all good things must come to an end, even the app of many regrets.

Source: by Michael K. Spencer | Medium.com

Facebook Sued By PTSD-Stricken Moderator For Non-stop Exposure To “Rape, Torture, Bestiality, Beheadings, Suicide And Murder”

A Northern California woman hired to review flagged Facebook content has sued the social media giant after she was “exposed to highly toxic, unsafe, and injurious content during her employment as a content moderator at Facebook,” which she says gave her post traumatic stress disorder (PTSD).

Selena Scola moderated content for Facebook as an employee of contractor Pro Unlimited, Inc. between June 2017 and March of this year, according to her complaint. 

“Every day, Facebook users post millions of videos, images, and livestreamed broadcasts of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder,” the lawsuit reads. “To maintain a sanitized platform, maximize its already vast profits, and cultivate its public image, Facebook relies on people like Ms. Scola – known as “content moderators” – to view those posts and remove any that violate the corporation’s terms of use.”

“You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off,” one content moderator recently told the Guardian.

According to the lawsuit, Facebook content moderators are asked to review over 10 million potentially rule-breaking posts per week, with an error rate of less than one percent and a mission to review all user-reported content within 24 hours. Making the job even more difficult is Facebook Live, a feature that allows users to broadcast video streams on their Facebook pages.
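
Some back-of-the-envelope arithmetic makes those figures concrete (the inputs are the lawsuit's; the math is ours):

    # Figures from the complaint: 10 million flagged posts per week,
    # an error-rate target under one percent, a 24-hour review SLA.
    POSTS_PER_WEEK = 10_000_000
    ERROR_RATE_TARGET = 0.01

    print(f"{POSTS_PER_WEEK / (7 * 24 * 3600):.1f} posts/second, sustained")
    print(f"{POSTS_PER_WEEK * ERROR_RATE_TARGET:,.0f} wrong calls/week at target")
    # -> 16.5 posts/second, sustained
    # -> 100,000 wrong calls/week at target

In other words, even a workforce that hits the accuracy target is making a hundred thousand wrong calls every week, each one a graphic post either wrongly left up or wrongly taken down.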

The Facebook Live feature in particular “provides a platform for users to live stream murder, beheadings, torture, and even their own suicides, including the following:” 

In late April a father killed his 11-month-old daughter and live streamed it before hanging himself. Six days later, Naika Venant, a 14-year-old who lived in a foster home, tied a scarf to a shower’s glass door frame and hung herself. She streamed the whole suicide in real time on Facebook Live. Then in early May, a Georgia teenager took pills and placed a bag over her head in a suicide attempt. She live streamed the attempt on Facebook and survived only because viewers watching the event unfold called police, allowing them to arrive before she died.

As a result of having to review said content, Scola says she “developed and suffers from significant psychological trauma and post-traumatic stress disorder (PTSD)” – however she does not detail the specific imagery she was exposed to for fear of Facebook enforcing a non-disclosure agreement (NDA) she signed. 

Scola is currently the only named plaintiff in the class-action lawsuit; however, the suit says the potential class could include “thousands” of current and former moderators in California.

As Motherboard reports, moderators have to view a constant flood of information and use their judgement on how to best censor content per Facebook’s “constantly-changing rules.” 

Moderating content is a difficult job—multiple documentaries, longform investigations, and law articles have noted that moderators work long hours, are exposed to disturbing and graphic content, and have the tough task of determining whether a specific piece of content violates Facebook’s sometimes byzantine and constantly-changing rules. Facebook prides itself on accuracy, and with more than 2 billion users, Facebook’s work force of moderators are asked to review millions of possibly infringing posts every day. –Motherboard

“An outsider might not totally comprehend, we aren’t just exposed to the graphic videos—you’ll have to watch them closely, often repeatedly, for specific policy signifiers,” one moderation source told Motherboard. “Someone could be being graphically beaten in a video, and you could have to watch it a dozen times, sometimes with others present, while you decide whether the victim’s actions would count as self-defense or not, or whether the aggressor is the same person who posted the video.” 

The lawsuit also alleges that “Facebook does not provide its content moderators with sufficient training or implement the safety standards it helped develop … Ms. Scola’s PTSD symptoms may be triggered when she touches a computer mouse, enters a cold building, watches violence on television, hears loud noises, or is startled. Her symptoms are also triggered when she recalls or describes graphic imagery she was exposed to as a content moderator.”

Facebook told Motherboard that it is “currently reviewing the claim.”

“We recognize that this work can often be difficult. That is why we take the support of our content moderators incredibly seriously, starting with their training, the benefits they receive, and ensuring that every person reviewing Facebook content is offered psychological support and wellness resources,” the spokesperson said. “Facebook employees receive these in house and we also require companies that we partner with for content review to provide resources and psychological support, including onsite counseling—available at the location where the plaintiff worked—and other wellness resources like relaxation areas at many of our larger facilities.”

“This job is not for everyone, candidly, and we recognize that,” Brian Doegan, Facebook’s director of global training, community operations, told Motherboard in June. He said that new hires are gradually exposed to graphic content “so we don’t just radically expose you, but rather we do have a conversation about what it is, and what we’re going to be seeing.”

Doegan said that there are rooms in each office that are designed to help employees de-stress. –Motherboard

“What I admire is that at any point in this role, you have access to counselors, you have access to having conversations with other people,” he said. “There’s actual physical environments where you can go into, if you want to just kind of chillax, or if you want to go play a game, or if you just want to walk away, you know, be by yourself, that support system is pretty robust, and that is consistent across the board.”


Source: ZeroHedge