Category Archives: social media

Why It’s Impossible to Hide Your Location from Facebook

A computer science professor noticed that Facebook was still serving her location-based ads even though her phone’s GPS was turned off for the app. Investigating, she realized that Facebook has other ways to pin down her position.

“The Location Controls provided by Facebook give an illusion of control over the data that informs one’s ad experience, not actual control. Moreover, Facebook makes false claims about the effect of its controls.” This statement – which has the merit of being clear – was made by Aleksandra Korolova, assistant professor of computer science at the University of Southern California.

In an article published on Medium, she describes her surprise at continuing to see Facebook ads based on her geographical position even though she has disabled Facebook’s geolocation. She goes even further: her profile does not list the city where she lives, she has not posted a photo on Facebook for years, and she never publishes content that reveals her location.

https://www.zerohedge.com/sites/default/files/inline-images/zuckerberg%20trust%20me_2.jpg?itok=OCXLhvAx

Nothing in her behavior betrays her position

She has even disabled geolocation in Facebook’s other apps, such as WhatsApp, Instagram and Messenger. Finally, she never performs location-based searches in the Facebook app.

So how is it possible that Aleksandra Korolova sees ads geotargeted to Santa Monica, the California city where she lives, and to Los Angeles, where she works?

Why did she see ads for activities in Montana while she was crossing Glacier National Park in that state? And why did she see the same kind of geolocated ads when she went to Massachusetts or Israel?

Facebook has many other means than GPS alone

One could almost believe that this computer scientist is paranoid, except that her suspicions are well founded. Visiting the “About Facebook Ads” section, she first notes that the social network specifies that users’ location is also determined by “where you connect to the Internet.”

But things go even further when you consult the documents made available to advertisers. A diagram details the processes Facebook uses to measure visits to stores. In addition to classic GPS geolocation (which Aleksandra Korolova has disabled), Facebook collects the Wi-Fi networks around the user as well as Bluetooth signals, which can be picked up by beacons placed in stores.


IP addresses provide valuable clues

With this data, including connection IP addresses, Facebook needs nothing more to extrapolate a reasonably accurate position, even though geolocation is disabled on her device.

“We use the IP and other information like check-ins or the city mentioned in your profile,” a Facebook spokesperson confirmed to Gizmodo. There is no way to completely disable location for ads.
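To get a sense of how much an IP address alone reveals, here is a minimal sketch of IP-based geolocation, assuming the freely available MaxMind GeoLite2 City database and the geoip2 Python package. The database file name and the example IP are placeholders, and this is not Facebook’s pipeline; it only illustrates the general technique any service can apply to a connection address.

```python
# Minimal sketch: look up a rough location for an IP address.
# Assumes GeoLite2-City.mmdb has been downloaded from MaxMind
# and that the geoip2 package is installed (pip install geoip2).
import geoip2.database

reader = geoip2.database.Reader("GeoLite2-City.mmdb")  # assumed local path
response = reader.city("203.0.113.5")  # documentation-reserved example IP
print(response.city.name, response.location.latitude, response.location.longitude)
reader.close()
```

Even this coarse, city-level lookup is enough to explain why ads for Santa Monica or Los Angeles can follow a user whose GPS is switched off.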

Now you know: if we thought we were hiding our location from Facebook, we were wrong. The social network always has a way to work out your approximate position. And that is without counting what the New York Times revealed two weeks ago: Facebook shares users’ data with its partners, to the point where some of them could read private messages.


Source: vijbzabyss | Steemit


Apple Is Quietly Giving People ‘Trust Scores’ Based On Their iPhone Data

iPhone, iPad, Apple Watch and Apple TV data is used to see how trustworthy you are, similar to a scenario in the dystopian series Black Mirror

https://aim4truthblog.files.wordpress.com/2018/12/get-started-device-trust-score.jpg?w=532&h=949

Apple has quietly introduced “trust scores” for people based on how they use their iPhones and other devices.

The tech giant, which last month became the first public company to be worth more than $1 trillion, said in an update to its privacy policy that the scores would be determined by tracking the calls and emails made on Apple devices. 

In an update to its privacy policy, Apple said the rating system could be used to help fight fraud, though specific examples of how this would work were not given.

The provision, first spotted by VentureBeat, appears in an update to the iTunes Store and Privacy page and comes ahead of the release of the iPhone Xs and iPhone Xs Max on Friday, 21 September.

“To help identify and prevent fraud, information about how you use your device, including the approximate number of phone calls or emails you send and receive, will be used to compute a device trust score when you attempt a purchase,” the page reads. “The submissions are designed so Apple cannot learn the real values on your device. The scores are stored for a fixed time on our servers.”
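Apple has not said how these submissions are protected, but the claim that it “cannot learn the real values on your device” is the kind of guarantee local differential privacy provides, a technique Apple has publicly adopted for other on-device telemetry. The following is only a minimal sketch under that assumption; the function name and parameters are illustrative, not Apple’s implementation.

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Report an approximate count instead of the real on-device value.

    Adding Laplace noise with scale 1/epsilon to a count (sensitivity 1)
    makes this single submission epsilon-differentially private.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: the device submits a noisy weekly call count.
print(noisy_count(42))
```

A server can still use such noisy figures to score devices in aggregate while never seeing the exact number of calls or emails, which is consistent with the wording of Apple’s policy.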

The method is reminiscent of an episode of the dystopian TV series Black Mirror, in which people are rated on their interactions with other people.

In the episode, Nosedive, the ratings are used to determine a person’s socioeconomic status, affecting their access to healthcare, transport and housing.

The comparison to the episode was noted by people on social media, with some calling it “dangerous”.

https://static.independent.co.uk/s3fs-public/thumbnails/image/2018/09/18/12/iphone.jpg?width=1368&height=912&fit=bounds&format=pjpg&auto=webp&quality=70

The new Apple iPhone Xs (L) and iPhone Xs Max (R) are displayed during an Apple special event at the Steve Jobs Theater on September 12, 2018 in Cupertino, California (Justin Sullivan/Getty Images)

The disclosure of the device tracking fits with Apple’s promise to provide transparency regarding its collection of user data.

The vagueness of the language used in the update, however, means it could be interpreted in a broad and potentially invasive way. It is also unusual that it applies to Apple TVs, which cannot send emails or make phone calls.

A spokesperson for Apple was not immediately available for comment.

Source: by Anthony Cuthbertson | Independent

 

Facebook is Dying, Here’s Proof

https://whiskeytangotexas.files.wordpress.com/2018/11/e46fe-1zuqxgccwui09vt0imrkoxg.jpeg?w=623&h=350

Facebook is not sorry, it’s getting desperate.

The #DeleteFacebook hashtag is alive and well. It’s become an attractive option for anyone brave enough to confront their mobile addiction. In fact, we’re deleting the app and deleting our accounts at such a pace that Facebook is fighting back! Yes, you heard me right.

Facebook is now making users wait twice as long to delete their accounts

So you finally decided your life would be better without Facebook, Instagram, Messenger and WhatsApp. Then you notice something odd: it’s harder than ever to quit, because Facebook now makes saying goodbye a long-drawn-out affair. Of course, Facebook doesn’t want you (its product) to leave.

In a shocking and bizarre change to the product and customer experience, it now takes a full month to delete your Facebook account, twice as long as before. That’s hardly free will, or the right to say no and break up with an app you may have spent far too much of your life on.

Facebook Thinks We Should Have a “Grace” Period

Facebook thinks it should override our decision by making us wait longer. Facebook really understands trust and consumers, it seems. When a user tries to delete their account, it makes them wait out a “grace period” before the account is actually deleted.

Facebook must seriously be losing a lot of users in 2018 to pull this stunt. Where’s the strategy, bro? The change comes after months of scandals and PR crises for Facebook, including Cambridge Analytica and the recent hack affecting 50 million accounts. To people under 45, it has never been clearer that we should delete Facebook’s apps.

Facebook thought it could be the gateway to the world and help bring the world closer together; instead it made us the product, harvested our data and sold it to the highest bidder, not to mention scamming brands out of millions of dollars each year. I may no longer even exist on Facebook, but their targeting data on me still does. Think about what that means for a while.

Facebook even shared user data with Chinese companies, and it handed over “deep access” to user data to 60 other tech companies. Facebook’s laundry list of privacy invasions is downright criminal, and not only should Mark Zuckerberg not be its CEO, he should be held accountable.

Listen: if a data breach affecting 50 million accounts could cost Facebook $1.6 billion in EU fines (the General Data Protection Regulation allows fines of up to 4 per cent of global annual turnover, and 4 per cent of Facebook’s roughly $40 billion in 2017 revenue is about $1.6 billion), how much should it pay the average user for its crimes in the long run? Facebook is already breaking the law with breaches of the GDPR in the European Union.

A One-Month Wait to Delete Facebook is Worse than Censorship

So now we are prisoners online, it would seem, according to Facebook. Thinking of deleting your Facebook account? Not so fast. Sorry, my friends, it will now take an entire month, up from 14 days before. And even if you delete your account, don’t expect Facebook to put your data in the trash bin. That’s impossible.

Facebook’s trust dilemma is so epic that the stock could decline further even as pundits say it’s alright. Just wait until we find out how many users leave Facebook’s flagship app by 2020. It’s going to be pretty epic. The change to the deletion time was first noticed by The Verge.

Amazon gives me speed and convenience, and Facebook gives me fraud and imprisonment. Good luck competing in the future of advertising, Facebook. This is the sort of thing that will make users revive the #deletefacebook campaign, and it certainly makes me upset.

  • When a user decides to delete their Facebook account, it doesn’t actually get deleted straight away. Instead, there’s a “grace period,” in which the account remains inactive but accessible — just in case the user gets cold feet and decides to stay on Facebook after all.
  • Aw, yeah, I really can’t quit, just one more thumb scroll of my legacy feed where nobody I know is active anymore.

Facebook seems to think nostalgia is cool: historically, that grace period has been 14 days, or two weeks. I bet Mark is nostalgic for the good old days, but major failures in leadership, strategy and pivoting have led Facebook down a dead-end path. When the vanity of having billions of users retreats into the past, Facebook doesn’t have a product, because the product has always been you and your data!

Silicon Valley doesn’t take the regulation of AI seriously (because it’s too expensive) and Facebook and YouTube are prime examples of this. When the talent exodus starts like it has for Facebook and Snapchat, it’s pretty serious. There’s no saving a sinking ship that makes it harder to leave.

Don’t be afraid; we all have to move on. It’s best to terminate now if you want your data deleted. It’s time to do the unthinkable and lead a higher quality life:

https://whiskeytangotexas.files.wordpress.com/2018/11/b52f5-1e52orrngpnv4nrjlekvrdg.jpeg?w=249&h=155

  • Delete Facebook
  • Delete Messenger
  • Delete Instagram
  • Delete WhatsApp

Facebook was once a candy treat of the digital dopamine variety for human connection. That era is long gone.

Of course, a longer grace period is also to Facebook’s advantage as it mindf8cks us into thinking it’s still relevant. Twitter is nostalgia, Facebook is just dumb.

If your value is tied to massive numbers of users and those users are leaving you, you have nowhere to go but down. Instagram is no YouTube and WhatsApp is no WeChat. If only they had had the sense to change their CEO, things could have been different. But all good things must come to an end, even the app of many regrets.

Source: by Michael K. Spencer | Medium.com

Facebook Sued By PTSD-Stricken Moderator For Non-stop Exposure To “Rape, Torture, Bestiality, Beheadings, Suicide And Murder”

A Northern California woman hired to review flagged Facebook content has sued the social media giant after she was “exposed to highly toxic, unsafe, and injurious content during her employment as a content moderator at Facebook,” which she says gave her post traumatic stress disorder (PTSD).

Selena Scola moderated content for Facebook as an employee of contractor Pro Unlimited, Inc. between June 2017 and March of this year, according to her complaint. 

“Every day, Facebook users post millions of videos, images, and livestreamed broadcasts of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder,” the lawsuit reads. “To maintain a sanitized platform, maximize its already vast profits, and cultivate its public image, Facebook relies on people like Ms. Scola – known as “content moderators” – to view those posts and remove any that violate the corporation’s terms of use.”

“You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off,” one content moderator recently told the Guardian.

According to the lawsuit, Facebook content moderators are asked to review over 10 million potentially rule-breaking posts per week – with an error rate of less than one percent – and a mission to review all user-reported content within 24 hours. Making the job even more difficult is Facebook Live, a feature that allows users to broadcast video streams on their Facebook pages.

The Facebook Live feature in particular “provides a platform for users to live stream murder, beheadings, torture, and even their own suicides, including the following:” 

In late April a father killed his 11-month-old daughter and live streamed it before hanging himself. Six days later, Naika Venant, a 14-year-old who lived in a foster home, tied a scarf to a shower’s glass door frame and hung herself. She streamed the whole suicide in real time on Facebook Live. Then in early May, a Georgia teenager took pills and placed a bag over her head in a suicide attempt. She live streamed the attempt on Facebook and survived only because viewers watching the event unfold called police, allowing them to arrive before she died.

As a result of having to review said content, Scola says she “developed and suffers from significant psychological trauma and post-traumatic stress disorder (PTSD)” – however she does not detail the specific imagery she was exposed to for fear of Facebook enforcing a non-disclosure agreement (NDA) she signed. 

Scola is currently the only named plaintiff in the class-action lawsuit; however, the complaint says that the potential class could include “thousands” of current and former moderators in California.

As Motherboard reports, moderators have to view a constant flood of information and use their judgement on how to best censor content per Facebook’s “constantly-changing rules.” 

Moderating content is a difficult job—multiple documentaries, longform investigations, and law articles have noted that moderators work long hours, are exposed to disturbing and graphic content, and have the tough task of determining whether a specific piece of content violates Facebook’s sometimes byzantine and constantly-changing rules. Facebook prides itself on accuracy, and with more than 2 billion users, Facebook’s work force of moderators are asked to review millions of possibly infringing posts every day. –Motherboard

“An outsider might not totally comprehend, we aren’t just exposed to the graphic videos—you’ll have to watch them closely, often repeatedly, for specific policy signifiers,” one moderation source told Motherboard. “Someone could be being graphically beaten in a video, and you could have to watch it a dozen times, sometimes with others present, while you decide whether the victim’s actions would count as self-defense or not, or whether the aggressor is the same person who posted the video.” 

The lawsuit also alleges that “Facebook does not provide its content moderators with sufficient training or implement the safety standards it helped develop … Ms. Scola’s PTSD symptoms may be triggered when she touches a computer mouse, enters a cold building, watches violence on television, hears loud noises, or is startled. Her symptoms are also triggered when she recalls or describes graphic imagery she was exposed to as a content moderator.”

Facebook told Motherboard that it is “currently reviewing the claim.”

“We recognize that this work can often be difficult. That is why we take the support of our content moderators incredibly seriously, starting with their training, the benefits they receive, and ensuring that every person reviewing Facebook content is offered psychological support and wellness resources,” the spokesperson said. “Facebook employees receive these in house and we also require companies that we partner with for content review to provide resources and psychological support, including onsite counseling—available at the location where the plaintiff worked—and other wellness resources like relaxation areas at many of our larger facilities.”

“This job is not for everyone, candidly, and we recognize that,” Brian Doegan, Facebook’s director of global training, community operations, told Motherboard in June. He said that new hires are gradually exposed to graphic content “so we don’t just radically expose you, but rather we do have a conversation about what it is, and what we’re going to be seeing.”

Doegan said that there are rooms in each office that are designed to help employees de-stress. –Motherboard

“What I admire is that at any point in this role, you have access to counselors, you have access to having conversations with other people,” he said. “There’s actual physical environments where you can go into, if you want to just kind of chillax, or if you want to go play a game, or if you just want to walk away, you know, be by yourself, that support system is pretty robust, and that is consistent across the board.”


Source: ZeroHedge