Tag Archives: Artificial Intelligence

Amazon Facial Recognition Can Now Detect Your Fear

Amazon this week said that its Rekognition facial recognition software can now detect a person’s fear, according to CNBC.

(Photo: Arun Sansar | AFP | Getty Images)

As one of several Amazon Web Services (AWS) cloud services, Rekognition can be used for facial analysis or sentiment analysis by identifying different expressions and predicting emotions based on images of people’s faces. The system uses AI to ‘learn’ as it compiles data. 

The tech giant revealed updates to the controversial tool on Monday that include improving the accuracy and functionality of its face analysis features such as identifying gender, emotions and age range.

“With this release, we have further improved the accuracy of gender identification,” Amazon said in a blog post. “In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear.’” – CNBC
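For developers, these per-emotion confidence scores come back through Rekognition’s DetectFaces API. The following is a minimal Python sketch of reading them; the boto3 call is shown but commented out because it requires AWS credentials, and the sample response below is a hypothetical fragment in the documented shape, not real output:

```python
# Minimal sketch of reading emotion scores from Amazon Rekognition's
# DetectFaces API via boto3. The live call needs AWS credentials, so
# it is commented out and a hypothetical sample response is used.

# import boto3
# client = boto3.client("rekognition")
# response = client.detect_faces(
#     Image={"S3Object": {"Bucket": "my-bucket", "Name": "face.jpg"}},
#     Attributes=["ALL"],  # "ALL" includes the Emotions attribute
# )

# Hypothetical response fragment (shape follows the API docs):
sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "FEAR", "Confidence": 62.1},
                {"Type": "SURPRISED", "Confidence": 21.4},
                {"Type": "CALM", "Confidence": 9.8},
            ]
        }
    ]
}

def top_emotion(face_detail):
    """Return the (type, confidence) pair with the highest confidence."""
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

for face in sample_response["FaceDetails"]:
    label, conf = top_emotion(face)
    print(f"{label}: {conf:.1f}%")  # FEAR: 62.1%
```

Note that the API always returns a confidence for every emotion label; it is up to the caller to decide whether the top score is high enough to act on, which is exactly where the accuracy concerns raised below come in.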

AI researchers at Microsoft, Kairos, Affectiva and others have spent considerable time and resources trying to read a person’s emotions based on their facial expressions, movements, voice and other factors.

That said, some experts have noted that people react and communicate differently based on culture and situation – which means that similar facial expressions and movements can convey more than one category of emotions. As such, researchers have warned “it is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be scientific facts,” according to the report. 

CNBC notes that Rekognition has faced criticism for its use by law enforcement agencies, as well as a reported pitch to Immigration and Customs Enforcement, and that it has been used by organizations that work with law enforcement.

Source: ZeroHedge

All Four Major Wireless Carriers Hit With Lawsuits Over Sharing, Selling Location Data

(TechDirt) We’ve noted repeatedly that if you’re upset about Facebook’s privacy scandals, you should be equally concerned about the wireless industry’s ongoing location data scandals. Not only were the major carriers caught selling your location data to any nitwit with a checkbook, they were even found to be selling your E-911 location data, which provides even more granular detail about your location than GPS provides. This data was then found to have been widely abused by everybody from law enforcement to randos pretending to be law enforcement.

Throughout all this, the Ajit Pai FCC has done absolutely nothing to seriously police the problem. Meaning that while carriers have promised to stop collecting and selling this data, nobody has bothered to force carriers to actually confirm this. Given telecom’s history when it comes to consumer privacy, somebody might just want to double check their math (and ask what happened to all that data already collected and sold over the last decade).

Compounding carrier problems, all four major wireless carriers last week were hit with class action lawsuits (correctly) noting the carriers had violated Section 222 of the Federal Communications Act by selling consumer proprietary network information (CPNI) data:

“Through its negligent and deliberate acts, including inexplicable failures to follow its own Privacy Policy, T-Mobile permitted access to Plaintiffs and Class Members’ CPI and CPNI,” the complaint against T-Mobile reads, referring to “confidential proprietary information” and “customer proprietary network information,” the latter of which includes location data.

It’s likely that the sale of 911 data is where carriers are in the most hot water, since that’s their most obvious infraction of the law. It’s of course worth pointing out that wireless carriers (and fixed-line ISPs, frankly) have been hoovering up and selling location, clickstream, and a vast ocean of other user data for decades with very few (any?) Congressional lawmakers much caring about it. It’s another example of how Facebook’s cavalier treatment of user data (and government apathy toward meaningful solutions) isn’t just some errant exception — it’s the norm.

Back in 2016, the previous FCC uncharacteristically tried to impose some pretty basic rules that would have gone a long way in preventing these location data scandals by requiring that carriers be more transparent about what data is collected and who it’s sold to. It also required consumers opt in to more sensitive (read: financial, location) data. But telecom lobbyists quickly convinced Congress to obliterate those rules in 2017 using the Congressional Review Act before they could even take effect.

Two years later finds the sector swimming in scandal, and everybody has a dumb look on their face, utterly perplexed as to how we got to this point.

Source: TechDirt

Google’s Medical Brain Team Is Training Machines to Predict When Patients Will Die


(Bloomberg) — A woman with late-stage breast cancer came to a city hospital, fluids already flooding her lungs. She saw two doctors and got a radiology scan. The hospital’s computers read her vital signs and estimated a 9.3 percent chance she would die during her stay.

Then came Google’s turn. A new type of algorithm created by the company read up on the woman — 175,639 data points — and rendered its assessment of her death risk: 19.9 percent. She passed away in a matter of days.

The harrowing account of the unidentified woman’s death was published by Google in May in research highlighting the health-care potential of neural networks, a form of artificial intelligence software that’s particularly good at using data to automatically learn and improve. Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances they will soon die.

What impressed medical experts most was Google’s ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information then spat out predictions. And it did it far faster and more accurately than existing techniques. Google’s system even showed which records led it to conclusions.

Hospitals, doctors and other health-care providers have been trying for years to better use stockpiles of electronic health records and other patient data. More information shared and highlighted at the right time could save lives — and at the very least help medical workers spend less time on paperwork and more time on patient care. But current methods of mining health data are costly, cumbersome and time consuming.

As much as 80 percent of the time spent on today’s predictive models goes to the “scut work” of making the data presentable, said Nigam Shah, an associate professor at Stanford University, who co-authored Google’s research paper, published in the journal Nature. Google’s approach avoids this. “You can throw in the kitchen sink and not have to worry about it,” Shah said.
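Shah’s “kitchen sink” remark can be illustrated with a toy sketch. This is emphatically not Google’s model; the tokenizer, the weights, and the sample records are all hypothetical. The point is only the shape of the approach: rather than hand-curating clean tabular features, every raw record, structured or free-text, is flattened into event tokens and the model learns a weight per token:

```python
# Toy illustration (not Google's actual system) of the "kitchen sink"
# idea: raw records -- lab results, notes, scanned-chart text -- are
# flattened into tokens, and a per-token weight stands in for what a
# trained model would learn, instead of hand-engineered features.

import re

def tokenize_record(record: str) -> list[str]:
    """Lowercase a raw record and split it into word tokens."""
    return re.findall(r"[a-z0-9]+", record.lower())

# Hypothetical learned weights; a real system would fit these from
# many thousands of hospital stays.
weights = {"metastatic": 2.0, "effusion": 1.5, "stable": -1.0}

def risk_score(records: list[str]) -> float:
    """Sum token weights across all of a patient's raw records."""
    return sum(weights.get(tok, 0.0)
               for rec in records
               for tok in tokenize_record(rec))

patient_records = [
    "Radiology: bilateral pleural effusion noted.",
    "Oncology note: metastatic breast carcinoma.",
]
print(risk_score(patient_records))  # 3.5
```

The appeal of this framing is that adding a new data source costs nothing up front: unrecognized tokens simply carry zero weight until training assigns them one, which is why the “scut work” of making data presentable largely disappears.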

Google’s next step is moving this predictive system into clinics, AI chief Jeff Dean told Bloomberg News in May. Dean’s health research unit — sometimes referred to as Medical Brain — is working on a slew of AI tools that can predict symptoms and disease with a level of accuracy that is being met with hope as well as alarm.

Inside the company, there’s a lot of excitement about the initiative. “They’ve finally found a new application for AI that has commercial promise,” one Googler says. Since Alphabet Inc.’s Google declared itself an “AI-first” company in 2016, much of its work in this area has gone to improve existing internet services. The advances coming from the Medical Brain team give Google the chance to break into a brand new market — something co-founders Larry Page and Sergey Brin have tried over and over again.

Software in health care is largely coded by hand these days. In contrast, Google’s approach, where machines learn to parse data on their own, “can just leapfrog everything else,” said Vik Bajaj, a former executive at Verily, an Alphabet health-care arm, and managing director of investment firm Foresite Capital. “They understand what problems are worth solving,” he said. “They’ve now done enough small experiments to know exactly what the fruitful directions are.”

Dean envisions the AI system steering doctors toward certain medications and diagnoses. Another Google researcher said existing models miss obvious medical events, including whether a patient had prior surgery. The person described existing hand-coded models as “an obvious, gigantic roadblock” in health care. The person asked not to be identified discussing work in progress.

For all the optimism over Google’s potential, harnessing AI to improve health-care outcomes remains a huge challenge. Other companies, notably IBM’s Watson unit, have tried to apply AI to medicine but have had limited success saving money and integrating the technology into reimbursement systems.

Google has long sought access to digital medical records, also with mixed results. For its recent research, the internet giant cut deals with the University of California, San Francisco, and the University of Chicago for 46 billion pieces of anonymous patient data. Google’s AI system created predictive models for each hospital, not one that parses data across the two, a harder problem. A solution for all hospitals would be even more challenging. Google is working to secure new partners for access to more records.

A deeper dive into health would only add to the vast amounts of information Google already has on us. “Companies like Google and other tech giants are going to have a unique, almost monopolistic, ability to capitalize on all the data we generate,” said Andrew Burt, chief privacy officer for data company Immuta. He and pediatric oncologist Samuel Volchenboum wrote a recent column arguing governments should prevent this data from becoming “the province of only a few companies,” like in online advertising where Google reigns.

Google is treading carefully when it comes to patient information, particularly as public scrutiny over data-collection rises. Last year, British regulators slapped DeepMind, another Alphabet AI lab, for testing an app that analyzed public medical records without telling patients that their information would be used like this. With the latest study, Google and its hospital partners insist their data is anonymous, secure and used with patient permission. Volchenboum said the company may have a more difficult time maintaining that data rigor if it expands to smaller hospitals and health-care networks.

Still, Volchenboum believes these algorithms could save lives and money. He hopes health records will be mixed with a sea of other stats. Eventually, AI models could include information on local weather and traffic — other factors that influence patient outcomes. “It’s almost like the hospital is an organism,” he said.

Few companies are better poised to analyze this organism than Google. The company and its Alphabet cousin, Verily, are developing devices to track far more biological signals. Even if consumers don’t take up wearable health trackers en masse, Google has plenty of other data wells to tap. It knows the weather and traffic. Google’s Android phones track things like how people walk, valuable information for measuring mental decline and some other ailments. All that could be thrown into the medical algorithmic soup.

Medical records are just part of Google’s AI health-care plans. Its Medical Brain has unfurled AI systems for radiology, ophthalmology and cardiology. They’re flirting with dermatology, too. Staff created an app for spotting malignant skin lesions; a product manager walks around the office with 15 fake tattoos on her arms to test it.

Dean, the AI boss, stresses this experimentation relies on serious medical counsel, not just curious software coders. Google is starting a new trial in India that uses its AI software to screen images of eyes for early signs of a condition called diabetic retinopathy. Before releasing it, Google had three retinal specialists furiously debate the early research results, Dean said.

Over time, Google could license these systems to clinics, or sell them through the company’s cloud-computing division as a sort of diagnostics-as-a-service. Microsoft Corp., a top cloud rival, is also working on predictive AI services. To commercialize an offering, Google would first need to get its hands on more records, which tend to vary widely across health providers. Google could buy them, but that may not sit as well with regulators or consumers. The deals with UCSF and the University of Chicago aren’t commercial.

For now, the company says it’s too early to settle on a business model. At Google’s annual developer conference in May, Lily Peng, a member of Medical Brain, walked through the team’s research outmatching humans in spotting heart disease risk. “Again,” she said, “I want to emphasize that this is really early on.”

Source: by Mark Bergen | Bloomberg Quint

Ominous Video Shows Swarm Of AI Drones Carry Out Mass Murder


Imagine a world in which people who hold a certain political ideology could be sought out and summarily executed with surgical precision using autonomous microdrones and with no collateral damage. While this may seem like a great way to eliminate terrorist organizations like ISIS or Al-Qaeda, it could also be used by them to wipe you out.

As the Future of Life Institute points out, “Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

All the technology needed to assemble machines that would carry out autonomous assassinations exists right now. However, as far as the public is concerned, it has not yet been implemented to the point we see in the ominous video below, which was created by the Future of Life Institute to raise awareness of this encroaching dystopian reality.

If the above video made you feel uneasy, it should. Knowing the horrifying lengths to which governments and terrorist organizations will go to implement their will, the idea of easily wiping out one’s political foes is like a wet dream to the warmongering elite.

Indeed, these weapons have long been the wet dreams of warmongers and are outlined in their own documents, which detail fleets of robots that “can fit inside a soldier’s pocket” and weapons that can “target specific genotypes.” The Project for a New American Century, the neocon think tank founded by William Kristol and Robert Kagan in 1997, described these weapons in its September 2000 report, Rebuilding America’s Defenses:

On land, the clash of massive, combined-arms armored forces may be replaced by the dashes of much lighter, stealthier and information-intensive forces, augmented by fleets of robots, some small enough to fit in soldiers’ pockets… And advanced forms of biological warfare that can “target” specific genotypes may transform biological warfare from the realm of terror to a politically useful tool.

The swarming drones—as fictionally portrayed in the video above—are real. In fact, as TFTP reported last year, Defense Secretary Ashton Carter disclosed the existence of Micro-drones that can be launched from the flare dispensers of F-16s and F/A-18 fighter jets traveling at the speed of sound. As the drones are released, tiny propellers provide them with the propulsion they need to find each other and create a swarm.

While the idea of weaponized swarming micro-robots deployed by one of the most destructive governments in the history of the world is ominous and horrifying, the war-mongering bureaucrats in D.C. are actually patting themselves on the back, releasing video of their robots to justify stealing even more of your tax dollars to build them.

After the release of the video, Defense Secretary Ashton Carter called for $902 million in 2017 funding for the Strategic Capabilities Office (SCO) — nearly twice what it received this year, and 18 times what it started with — and he got it.

If you thought the fictional video above was frightening, wait until you watch this very real demonstration of concept video below.

Given that the US government never reveals everything that it is working on, we can safely assume—given the information that they have already released—that some version of these killer robots exists right now.

Source: The Free Thought Project