
Tag Archives: AI


Normal Intrusions: Globalising AI Surveillance

They all do it: corporations, regimes, authorities.  They all have the same reasons: efficiency, serviceability, profitability, all under the umbrella term of “security”.  Call it surveillance, or call it monitoring the global citizenry; it all comes down to the same thing.  You are being watched for your own good, and such instances should be regarded as a norm.

Given the weaknesses of international law and the general hiccupping that accompanies efforts to formulate a global right to privacy, few such restrictions, or problems, preoccupy those in surveillance.  The entire business is burgeoning, a viral complex that does not risk any abatement.

The Carnegie Endowment for International Peace has released an unnerving report confirming that fact, though irritatingly using an index in doing so.  Its focus is Artificial Intelligence (AI) technology.  A definition of sorts is offered for AI, being “an integrated system that incorporates information acquisition objectives, logical reasoning principles, and self-correction capacities.”

When stated like that, the whole matter seems benign.  Machine learning, for instance, “analyses a large amount of information in order to discern a pattern to explain the current data and predict future uses.”

There are several perturbing highlights supplied by the report’s author, Steven Feldstein.  The relationship between military expenditure and states’ use of AI surveillance systems is noted, with “forty of the world’s top fifty military spending countries (based on cumulative military expenditures) also [using] AI surveillance technology.”  Across 176 countries, data gathered since 2017 shows that AI surveillance technologies are not merely good domestic fare but a thriving export business.

The ideological bent of the regime in question is no bar to the use of such surveillance.  Liberal democracies are noted as major users, with 51 percent of “advanced democracies” doing so.  That number, interestingly enough, is higher than the figures for “closed autocratic states” (37 percent), “electoral autocratic/competitive autocratic states” (41 percent) and “electoral democracies/illiberal democracies” (41 percent).

 …click on the above link to read the rest of the article…

Snowden Spills: Infamous Whistleblower Opines On Spycraft, AI, And Being Suicided

Edward Snowden has finally laid it all out – documenting his memoirs in a new 432-page book, Permanent Record, which will be published worldwide on Tuesday, September 17.

Meeting with both The Guardian and Spiegel Online in Moscow as part of its promotion, the infamous whistleblower spent nearly five hours with the two media outlets – offering a taste of what’s in the book, details on his background, and his thoughts on artificial intelligence, facial recognition, and other intelligence gathering tools coming to a dystopia near you. 

While The Guardian interview is ‘okay,’ scroll down for the far more interesting Spiegel interview, where Snowden goes way deeper into his cloak-and-dagger life, including thoughts on getting suicided. 


First, The Guardian:

Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK’s secret communication headquarters, GCHQ.

He describes the 18 years since the September 11 attacks as “a litany of American destruction by way of American self-destruction, with the promulgation of secret policies, secret laws, secret courts and secret wars”.

Snowden also said: “The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition.

An AI-equipped surveillance camera would be not a mere recording device, but could be made into something closer to an automated police officer.”  –The Guardian

Other notables from the Guardian interview: 

 …click on the above link to read the rest of the article…

When software rules the world

I was a young boy when elevator operators still closed those see-through, metal accordion interior elevator doors by hand and then moved the elevator up or down by rotating a knob on a wheel embedded in the elevator wall.

Within a few years all those operators were gone, replaced by numbered buttons on the elevator wall. Today, so many activities that used to be mediated by human judgement are now governed by algorithms inside automated systems.

Apart from the implications for elevator operators and others displaced by such technology, there is the question of transparency. It’s easy to determine visually whether an elevator door is open and the elevator is level with the floor you’re on so that you can safely exit.

As the world has seen to its horror twice recently, it’s harder to know whether software on a Boeing 737 MAX is giving you the right information and doing the right thing while you are in mid-flight.

Yet, more and more of our lives are being turned over to software. And, despite the toll in lives; stolen identities; computer breaches and cybercrime; and even the threat that organized military cyberwar units now pose to critical water and electricity infrastructure—despite all that, the public, industry and government remain in thrall to the idea that we should turn over more and more control of our lives to software.

Of course, software can do some things much better and faster than humans: vast computations and complex modeling; control of large-scale processes such as oil refining and air transportation routes; precision manufacturing by robots and many, many other tasks. We use software to tell machines to do repetitive and mundane tasks (utilizing their enormous capacity and speed) and to give us insight into the highly complex, for example, climate change.

 …click on the above link to read the rest of the article…

Creepy Billboards Track Consumers With AI Cameras That Target Ads Based On Mood

The Sunday Times discovered dozens of billboards with cameras and facial detection software are targeting consumers at shopping malls across the UK with personalized ads.

The report found 50 advertising screens with facial detection technology that identifies the age, gender, and mood of consumers, and even monitors how long they view the personalized ads. For instance, millennial men could be standing in front of the billboard; seconds later, a Gillette shaving cream ad is displayed on a giant screen.

Advertisers operating the new system have claimed it fully complies with the Data Protection Act 2018 because no consumer is identified, nor is their data collected or stored.

UK law indicates there is no legal requirement to tell shoppers that they’re being surveilled for commercial purposes.

Ocean Outdoor is the first advertiser to adopt public surveillance technology for smart billboards.

Called the LookOut system, it uses artificial intelligence and cameras to serve adverts to consumers based on gender, age, facial hair, eyewear, mood, and engagement level, according to Ocean Outdoor’s website.

The company lists several ways in which LookOut can be used:

  • Optimization – delivering the appropriate creative to the right audience at the right time
  • Visualize – using gaze recognition to trigger creative or an interactive experience
  • AR Enabled – using the HD cameras to create an augmented reality mirror or window effect, creating deep consumer engagement via the latest technology
  • Analytics – understanding your brand’s audience, post-campaign analysis and creative testing

Ocean Outdoor’s chief executive Tim Bleakley told The Sunday Times: “We pioneered a facial detection technology which identifies the characteristics of the face to allow you to talk to advertisers about mood, gender, emotion and those kind of things.”

“We can measure the level of happiness or sadness. We can measure the dwell time.”

 …click on the above link to read the rest of the article…

We’re All Being Judged By A Secret ‘Trustworthiness’ Score

Nearly everything we buy, how we buy, and where we’re buying from is secretly fed into AI-powered verification services that help companies guard against credit-card and other forms of fraud, according to the Wall Street Journal.

More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards and accounts that a vendor may want to block based on a person or entity’s overall “trustworthiness” score, according to a company spokeswoman.

From the Sift website: “Each time we get an event — be it a page view or an API event — we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.” 

The system is similar to a credit score – except there’s no way to find out your own Sift score.

Factors which contribute to one’s Sift score (per the WSJ): 

• Is the account new?

• Are there a lot of digits at the end of an email address?

• Is the transaction coming from an IP address that’s unusual for your account?

• Is the transaction coming from a region where there are a lot of hackers, such as China, Russia or Eastern Europe?

• Is the transaction coming from an anonymization network?

• Is the transaction happening at an odd time of day?

• Has the credit card being used had chargebacks associated with it?

• Is the browser different from what you typically use?

• Is the device different from what you typically use?
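A minimal sketch of how such a signal-weighted score might work. The feature names and weights below are invented for illustration; Sift's real model weighs more than 16,000 machine-learned signals and is not public.

```python
# Illustrative only: a toy risk score in the spirit of the signals listed
# above. Feature names and weights are invented for this sketch.

RISK_WEIGHTS = {
    "new_account": 15,
    "numeric_email_suffix": 10,
    "unusual_ip": 20,
    "anonymized_network": 25,
    "odd_hour": 5,
    "card_has_chargebacks": 30,
    "new_browser": 10,
    "new_device": 10,
}

def risk_score(signals: dict) -> int:
    """Sum the weights of the signals that fired, clamped to a 1-100 scale."""
    raw = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return max(1, min(100, raw))

order = {"new_account": True, "card_has_chargebacks": True, "odd_hour": True}
print(risk_score(order))  # 50: 15 + 30 + 5
```

The real system presumably learns its weights from observed fraud across its network rather than hard-coding them, which is what makes the score opaque to the person being scored.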

 …click on the above link to read the rest of the article…

Are we sleepwalking into an AI police state?

Predictive analytics enabling law enforcement to identify “high-risk” areas has highlighted ethical and legal quandaries

Science fiction has long speculated on the danger of a dystopian future and machines powered by artificial intelligence (AI). But with the advent of big data, we no longer need to speculate: the future has arrived. By the end of March, West Midlands Police is due to finish a prototype for the National Data Analytics Solution (NDAS), an AI system designed to predict the risk of where crime will be committed and by whom. NDAS could eventually be rolled out by every police force in the UK.

Ultimately, we need to be able to choose as a society how we use these technologies and what kind of society we really want to be

Fourteen police forces around the UK have used or planned to use such tools. But a report published in February by human rights group Liberty warns that far from being objective, police crime-mapping software reinforces pre-existing biases about who commits crime.

Current mapping tools use past crime data to identify so-called high-risk areas, leading to more intensive patrolling. Yet these areas are often already subject to disproportionate over-policing. By relying on data from police practices, according to Liberty’s advocacy director Corey Stoughton, these tools might simply “entrench discrimination against black and minority-ethnic people”.

Police mapping tools turning citizens into suspects

Ben Hayes, a data protection and ethics adviser to the European Union, United Nations and other international organisations, warns that the increased use of such mapping tools is increasingly turning ordinary citizens into suspects.

“People can be categorised as vulnerable, at risk, threatening, deserving or undeserving,” says Dr Hayes, noting that this tends to target those already marginalised. “Services such as border control, policing and social welfare are all subject to inherent bias. Machine-learning doesn’t eliminate those biases, it amplifies and reinforces them.”

 …click on the above link to read the rest of the article…

China Programming AI Drones To Autonomously Murder Without Human Input

China is programming new autonomous AI-powered drones to conduct “targeted military strikes” without a human making the decision to fire, according to a new report by the Center for a New American Security, a US national security think tank. 

Authored by Gregory C. Allen, the report is a comprehensive look at Chinese AI (and American officials’ underestimation of it). Allen notes that drones are becoming increasingly automated as designers integrate sophisticated AI systems into the decision-making processes for next-generation reconnaissance and weapons systems. Before writing his analysis, Allen participated in a series of meetings “with high-ranking Chinese officials in China’s Ministry of Foreign Affairs, leaders of China’s military AI research organizations, government think tank experts, and corporate executives at Chinese AI companies.” 

“Though many current generation drones are primarily remotely operated, Chinese officials generally expect drones and military robotics to feature ever more extensive AI and autonomous capabilities in the future,” writes Allen. “Chinese weapons manufacturers already are selling armed drones with significant amounts of combat autonomy.

The specific scenario described to me [by one anonymous Chinese official] is unintentional escalation related to the use of a drone,” said Allen in a Wednesday report by The Verge

As Allen explains, the operation of drones both large and small has become increasingly automated in recent years. In the US, drones are capable of basic autopilot, performing simple tasks like flying in a circle around a target. But China is being “more aggressive about introducing greater levels of autonomy closer to lethal use of force,” he says. One example is the Blowfish A2 drone, which China exports internationally and which, says Allen, is advertised as being capable of “full autonomy all the way up to targeted strikes.” –The Verge

 …click on the above link to read the rest of the article…

Smear Slander Rinse and Repeat

Frans Masereel Montmartre 1925

The way ‘news’ is reported through known outlets changes so fast hardly a soul notices that news as we once knew it no longer exists. This is due in large part to the advent of the internet in general, and social media in particular. On the one hand this has led to an absolute overkill in ‘news’, forcing people to pick between sources once they find they can’t read or view it all; on the other hand it has allowed news outlets to flood the former news waves with so much of the same that nobody can compare one source with the other anymore.

Once you achieve that situation, you’re more or less free to make the news, rather than just report on it. The rise of Donald Trump has made the existing mass media realize that one-sided negative reporting on the man sells better than anything objective can. The MSM have sort of won the battle versus the interwebs, albeit only in that regard, and only for this moment, but that is enough for them for now; just like their readers, they don’t have the scope or the energy to look any further or deeper.

This, in a nutshell (we really should take a much more profound look, but that’s another chapter), is what has changed the news, and what will keep on changing it until the truth sets us all free. This is what drives outlets like CNN, the New York Times and the Guardian today, because it provides them with readers and viewers. Which they would not have if they didn’t conduct a 24/7 war on a set list of topics they know their audience can’t get enough of.

…click on the above link to read the rest of the article…

Minority Report Comes To Life: UK Police Will Use AI To Prevent Crime

With increasing availability of information and new technologies, West Midlands Police in the metropolitan county of West Midlands in England, has taken a page from the 2002 American neo-noir science fiction film, Minority Report, and will soon deploy artificial intelligence to stop crime before it happens, New Scientist reveals. 

The “pre-crime” software, called the National Data Analytics Solution (NDAS), uses a blend of AI, citywide smart cameras, and statistics to try to evaluate the risk of someone committing, or becoming the victim of, a violent crime.

West Midlands Police has taken the lead on the project and will finish the prototype system by the end of 1Q 2019. Eight other police forces across the country are involved in the development, including London’s Metropolitan Police and Greater Manchester Police. NDAS will be piloted in the Midlands district before a nationwide rollout. One of the main reasons behind predictive policing is cost savings: it serves as a tool for law enforcement agencies that have been dealing with funding issues, said Iain Donnelly, the police lead on the project.

Donnelly insists NDAS algorithms will sniff out already known criminals, and divert them with “therapeutic interventions,” such as “support from local health or social workers” to avert a crime.

West Midlands Police used data and statistics from past criminal events to identify 1,400 potential indicators for crime, including 30 important ones. Machine learning algorithms then took the data points and learned how to detect crime while analyzing video from smart cameras.
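As a rough illustration of what narrowing 1,400 candidate indicators down to a few important ones might involve, here is a toy ranking of invented indicators by how often they co-occurred with a past crime. The data and indicator names are made up; NDAS's actual features and methods are not public.

```python
# Illustrative sketch only: rank candidate "indicators" by the fraction of
# their past occurrences that were followed by a violent crime, then keep
# the top k. All records and names here are invented.

from collections import Counter

# Each past record: (set of indicators present, whether a violent crime followed)
records = [
    ({"prior_arrest", "known_associates"}, True),
    ({"prior_arrest"}, True),
    ({"known_associates"}, False),
    ({"night_shift_worker"}, False),
    ({"prior_arrest", "night_shift_worker"}, True),
]

def top_indicators(records, k):
    """Rank indicators by the share of their occurrences followed by crime."""
    seen, positive = Counter(), Counter()
    for indicators, outcome in records:
        for ind in indicators:
            seen[ind] += 1
            if outcome:
                positive[ind] += 1
    rates = {ind: positive[ind] / seen[ind] for ind in seen}
    return sorted(rates, key=rates.get, reverse=True)[:k]

print(top_indicators(records, 2))  # 'prior_arrest' ranks first (3 of 3)
```

The critique in the excerpts above applies directly to this kind of ranking: if the historical records themselves reflect over-policing of certain groups, the "important" indicators simply encode that bias.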

Predictive policing is based on prior criminal stats, including stops, arrests, and convictions, and it is incapable of expanding the pool of suspects beyond the database.

Predictably, the Alan Turing Institute found “serious ethical issues” with the NDAS, warning the program could have good intentions but “inaccurate prediction” is an ongoing concern. 

…click on the above link to read the rest of the article…

The Dystopian Future of Facebook

Photo Source thierry ehrmann | CC BY 2.0

This year Facebook filed two very interesting patents in the US. One was a patent for emotion recognition technology, which recognises human emotions through facial expressions and can therefore assess what mood we are in at any given time: happy or anxious, for example. This can be done either by a webcam or through a phone cam. The technology is relatively straightforward. AI-driven algorithms analyse and then decipher facial expressions, then match the duration and intensity of each expression with a corresponding emotion. Take contempt, for example. Measured by a range of values from 0 to 100, an expression of contempt could be indicated by a smirking smile, a furrowed brow and a wrinkled nose. An emotion can then be extrapolated from the data, linking it to your dominant personality traits: openness, introversion, neuroticism, say.
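The matching step described above can be sketched roughly as follows. The expression-to-emotion table and intensity values are invented for illustration; Facebook's actual model is not public.

```python
# A hedged sketch of the matching step: map measured facial-expression
# intensities (0-100) to a dominant emotion. The table below is invented.

EXPRESSION_TO_EMOTION = {
    "smirk": "contempt",
    "furrowed_brow": "contempt",
    "wrinkled_nose": "contempt",
    "raised_cheeks": "happiness",
    "wide_eyes": "surprise",
}

def dominant_emotion(intensities: dict) -> str:
    """Accumulate intensity per emotion and return the strongest one."""
    totals = {}
    for expression, value in intensities.items():
        emotion = EXPRESSION_TO_EMOTION.get(expression)
        if emotion is not None:
            totals[emotion] = totals.get(emotion, 0) + value
    return max(totals, key=totals.get) if totals else "neutral"

frame = {"smirk": 60, "furrowed_brow": 40, "raised_cheeks": 20}
print(dominant_emotion(frame))  # contempt (100) outweighs happiness (20)
```

A production system would of course use a trained classifier over video frames rather than a fixed lookup table, but the principle of aggregating expression intensities into an emotion label is the same.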

The accuracy of the match may not be perfect, and it’s always good to be sceptical about what is being claimed, but AI (Artificial Intelligence) learns exponentially and the technology keeps getting better; it is already much, much quicker than human intelligence.

Recently at Columbia University a competition was set up between human lawyers and their AI counterparts. Both read a series of non-disclosure agreements with loopholes in them. AI found 95% compared to 88% by humans. The human lawyers took 90 minutes to read them; AI took 22 seconds. More incredibly still, last year Google’s AlphaZero beat Stockfish 8 in chess. Stockfish 8 is an open-sourced chess engine with access to centuries of human chess experience. Yet AlphaZero taught itself using machine learning principles, free of human instruction, beating Stockfish 8 28 times and drawing 72 out of 100. It took AlphaZero four hours to independently teach itself chess. Four hours from blank slate to genius.

…click on the above link to read the rest of the article…

Has The Era Of Autonomous Warfare Finally Arrived?

The global arms race for the latest weapons of war is a naturally escalating cycle of countries pursuing ways to dominate the battlefield of the future. Increasingly, that battlefield is a matrix of soldiers with traditional weapons, robots, drones and cyberweapons. Until this point, command over this matrix has ultimately been in the hands of humans. Now, however, many of the trends in artificial intelligence-driven autonomy are enabling data collection, analysis and potentially combat to be done by algorithms.

Another key signpost has entered the roadmap toward a future of autonomous systems capable of engaging in combat without human oversight. The U.S. military announced the first ever successful unmanned aerial “kill” of another aircraft during a previously unreported training exercise.

The successful test late last year showed the U.S. Air Force that an unmanned vehicle like the MQ-9 has the ability to conduct air-to-air combat, much like its manned fighter brethren such as an F-15 Eagle or F-22 Raptor, according to Col. Julian Cheater, commander of the 432nd Wing at Creech Air Force Base, Nevada.

“Something that’s unclassified but not well known, we recently in November … launched an air-to-air missile against a maneuvering target that scored a direct hit,” Cheater said. Military.com sat down with Cheater here at the Air Force Association Air, Space and Cyber conference outside Washington, D.C.

“It was an MQ-9 versus a drone with a heat-seeking air-to-air missile, and it was direct hit … during a test,” he said of the first-of-its-kind kill.

(Source: Military.com)

An Air Force Special Operations Command MQ-9 Reaper taxis. (U.S. Air Force photo/Dennis Henry)

The fact that the military has this capability should not be shocking, as it has been well documented on this website and others that the largest defense contractors in the world have developed a clear intention to create fully autonomous weapons systems.

…click on the above link to read the rest of the article…

Facebook Plans “War Room” And A.I. Software To Prevent Election Meddling Ahead Of Midterms

Apparently Facebook thinks it’s the US military, or a NATO command center, or perhaps its millennial employees want to relive the 1983 cult classic “WarGames” movie.

Facebook announced Wednesday that it plans to set up a “war room” at its Silicon Valley campus to prevent potential foreign election meddling during the midterms.

“We are setting up a war room in Menlo Park for the Brazil and US elections,” Facebook elections and civic engagement director Samidh Chakrabarti said, according to the AFP. He added, “It is going to serve as a command center so we can make real-time decisions as needed.”

A “command center” in a “war room” to make “real-time” decisions huh?… And oh Facebook says it will gain help from artificial intelligence software to prevent fake posts by those pesky Russians to boot…

Apparently office space for the planned anti-election meddling HQ has already been set aside, as Facebook says it’s still in the process of “building” this war room.

But the AFP reports this incredibly amusing detail:

He declined to say when the “war room” — currently a conference room with a paper sign taped to the door — would be in operation.

Teams at Facebook have been honing responses to potential scenarios such as floods of bogus news or campaigns to trick people into falsely thinking they can cast ballots by text message, according to executives.

Don’t worry America, your elections are safe with Facebook on watch at the forward operating post! Though a conference room with a paper sign taped to the door likely won’t convey much confidence and readiness to the American public.

…click on the above link to read the rest of the article…

Will Bilderberg still be relevant as the future of war is transformed?

This year’s summit is all about war – but what they all want to conquer is artificial intelligence

A protester’s sign saying ‘Stop The New World Order’ near the venue of the 2016 Bilderberg conference in Dresden, Germany. Photograph: Chad Buchanan/Getty Images

This year’s Bilderberg summit is a council of war. On the agenda: Russia and Iran. In the conference room: the secretary general of Nato, the German defence minister, and the director of the French foreign intelligence service, DGSE.

They are joined in Turin, Italy, by a slew of academic strategists and military theorists, but for those countries in geopolitical hotspots there is nothing theoretical about these talks. Not when the prime ministers of Estonia and Serbia are discussing Russia, or Turkey’s deputy PM is talking about Iran.

The clearest indication that some sort of US-led conflict is on the cards is the presence of the Pentagon’s top war-gamer, James H Baker. He is an expert in military trends, and no trend is more trendy in the world of battle strategy than artificial intelligence. Bilderberg is devoting a whole session to AI this year – and has invited military theorist Michael C Horowitz, who has written extensively on its likely impact on the future of war.


Russia’s president, Vladimir Putin, left, greets China’s president, Xi Jinping. China and Russia are investing heavily in artificial intelligence. Photograph: Sergey Guneyev/AFP/Getty Images

Horowitz sees AI as “the ultimate enabler”. In an article published just a few weeks ago in the Texas National Security Review, he quotes Putin’s remark from 2017: “Artificial intelligence is the future, not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”

…click on the above link to read the rest of the article…

New DARPA Program Plans To Patrol Cities With AI Drones

On May 10, the Defense Advanced Research Projects Agency (DARPA) unveiled the Urban Reconnaissance through Supervised Autonomy (URSA) program, which addresses the issues of reconnaissance, surveillance, and target acquisition within urban environments.

The primary objective of the URSA program is to evaluate the feasibility and effectiveness of blending unmanned aerial systems, sensor technologies, and advanced machine learning algorithms to “enable improved techniques for rapidly discriminating hostile intent and filtering out threats in complex urban environments,” said FedBizOpps.

In other words, the Pentagon is developing a program of high-tech cameras mounted on drones and other robots that monitor cities, enabling machine-learning systems to identify and discriminate between civilians and terrorists.

DARPA provides a simple scenario of what a URSA engagement would look like: 

“A static sensor located near an overseas military installation detects an individual moving across an urban intersection and towards the installation outside of normal pedestrian pathways. An unmanned aerial system (UAS) equipped with a loudspeaker delivers a warning message. The person is then observed running into a neighboring building. Later, URSA detects an individual emerging from a different door at the opposite end of the building, but confirms it is the same person and sends a different UAS to investigate.

This second UAS determines that the individual has resumed movement toward a restricted area. It releases a nonlethal flash-bang device at a safe distance to ensure the individual attends to the second message and delivers a sterner warning. This second UAS takes video of the subject and determines that the person’s gait and direction are unchanged even when a third UAS flies directly in front of the person and illuminates him with an eye-safe laser dot. URSA then alerts the human supervisor and provides a summary of these observations, warning actions, and the person’s responses and current location.”

…click on the above link to read the rest of the article…

 

Facebook: Six Degrees of Giant Squid


Hildegard von Bingen (1098-1179), German artist, philosopher, composer and mystic: Cosmic Tree
All of a sudden, politicians in the EU, UK, and USA all want to talk to Mark Zuckerberg. That’s a bad enough sign all by itself. It means they have all either been asleep or been complicit, or they’re not very bright. The media tries to convince us the Facebook ‘scandal’ is about Trump, Russia (yawn..) and elections. It’s not. Not even close.

If Zuckerberg ever shows up for any of these meetings with ‘worried’ politicians, he’ll come with a cabal of lawyers in tow, and they’ll put the blame on anyone but Facebook and say the company was tricked by devious parties who didn’t live up to their legal agreements.

After that, the argument won’t be whether Facebook broke any laws for allowing data breaches, but whether their data use policy itself is, and always was, illegal. Now, Facebook has been around for a few years, with their policies, and nobody ever raised their voices. Not really, that is.

And then it’ll all fizzle out, amid some additional grandstanding from all involved, face-saving galore, and more blame for Trump and Russia.

The new European Parliament chief Antonio Tajani said yesterday: “We’ve invited Mark Zuckerberg to the European Parliament. Facebook needs to clarify before the representatives of 500 million Europeans that personal data is not being used to manipulate democracy.”

That’s all you need to know, really. Personal data can be used to manipulate anything as long as it’s not democracy. Or at least democracy as the Brussels elite choose to define it.

First: this is not about Cambridge Analytica, it’s about Facebook. Or rather, it’s about the entire social media and search industry, as well as its connections to the intelligence community. Don’t ever again see Google or Facebook as not being part of that.

What Facebook enabled Cambridge Analytica to do, it will do ten times bigger itself. And it sells licences to do it to probably thousands of other ‘developers’. The CIA and NSA may have unlimited powers, but prior to Alphabet and Facebook, they never had the databases. They do now, and they’re using them. ‘Manipulate democracy’? What democracy?

…click on the above link to read the rest of the article…

 
