YouTube star Casey Neistat criticises video site’s leaders

Mr Neistat suggests YouTube’s community of creators acts as a defence against online competitors

One of YouTube’s most influential vloggers has chastised the service’s leaders, claiming they are failing many of their most popular video creators.

Specifically, Casey Neistat criticised the way the platform had made it impossible for some videos to generate advertising revenue, without clearly explaining the rules to its community.

One of his own videos – an interview with Indonesia’s president – was temporarily “demonetised” last week.

YouTube has said it is listening.

“We watched Casey’s video and appreciate him and the wider community voicing their concerns,” a spokeswoman told the BBC.

“We know this has been a difficult few months, and we’re working hard to improve our systems. We’re making progress, but we know there is a lot more to do.”

‘Existential threat’

Mr Neistat has more than eight million subscribers on YouTube, who have signed up to be alerted when he posts. He has also struck a multi-million-dollar deal to create content for CNN on the platform.

He is normally viewed as being one of the leading champions of the site.

But in a video posted on Tuesday, he said he felt compelled to speak out because the level of upset among creators posed an “existential threat to YouTube’s entire business”.

Mr Neistat’s vlog from Indonesia was demonetised until he appealed against the decision

The Google division began stripping some videos of adverts earlier in the year after several major brands suspended YouTube campaigns because their marketing clips had been attached to extremist content.

To address the problem, YouTube introduced an algorithm that determines which clips are “family friendly” and thus allowed to continue making money for their creators.

But Mr Neistat said the decision-making process had been badly communicated.

“There are no answers anywhere, and there’s no-one telling you what’s going on,” he said.

“The thing that was most troubling for me… was the lack of communication, the lack of transparency on the part of YouTube.”

“People are… putting the same amount of work, the same amount of energy and the same amount of expense into the content they’re creating, but now they’re getting paid only a fraction of what they did.”

A recent decision to demonetise creators’ videos about the Las Vegas shootings had caused particular ire, Mr Neistat said, since a video featuring the chat-show host Jimmy Kimmel discussing the same incident had been allowed to continue featuring ads.

“It sort of reeks of hypocrisy, and again the community felt like a second-class citizen,” he said.

As a rule, YouTube prevents adverts from running on videos about tragedies.

But this does not apply to clips posted by select partners – including Mr Kimmel’s employer, ABC – who are allowed to sell ads themselves rather than relying on Google to do so.

A recent clip of Jimmy Kimmel discussing a mass shooting in Las Vegas was allowed to show adverts

“In the specific case of tragedies, like the one in Las Vegas, we are working to not allow such partners to sell against such content,” a YouTube spokeswoman said last week.

“We have not completed this work yet, but will soon.”

Mr Neistat suggested a better alternative would be to give creators more control over whose adverts appeared alongside their clips.

The video-maker is far from being the first YouTuber to complain about the issue. But one industry-watcher said his intervention carried weight.

“People look to Casey to be not just an inspiration but also a voice for the community – he’s very well respected and people do listen to what he says and follow his lead,” said Alex Brinnand, editor of TenEighty magazine.

“The fact that he has put out this video… will help ensure his audience is aware of the issue and becomes as equally unhappy as he is.”

WATCH: Other YouTubers told the BBC about their frustrations last month

Switch to Twitch

Mr Neistat highlighted that Twitter’s rival video-based social network, Vine, had collapsed after its managers had disappointed several of its leading clip creators and suggested YouTube could face a similar exodus.

“When you think about Netflix or Amazon or Hulu or any of these other digital distribution platforms right now, they’ve all got money, they’re all willing to spend money, and they’re trying to figure out how to diversify their audience,” he said.

He added that Amazon’s Twitch service – which currently focuses on video-games-related live feeds – had already tempted some.

Twitch began allowing users to upload pre-recorded videos a year ago and may unveil new features at its annual TwitchCon event, which begins on Friday.

Amazon paid $970m (£736m) to buy Twitch in 2014

However, Mr Brinnand questioned whether the service had done enough to lure away YouTube’s biggest names yet.

“For creators like Casey, I don’t think at the moment that Twitch is a viable option,” he said.

“It’s a lot more geared to live or as-live content, so doesn’t cater to the same audience the vloggers have with their more packaged, produced videos.

“But Twitch has laid the foundations for the future – it already offers very appealing revenue streams – and could be a contender if it develops a stronger platform for standard video.”

Still photographs spring to life

New software makes it possible to breathe life into still photo portraits.

The project was developed in Israel with the help of a leading social network.

It has the potential to become the net’s next viral hit, but also has more serious long-term uses, as one of its creators explains.

Native American tribe sues Amazon and Microsoft

Some believe the patent holder is using a loophole to avoid scrutiny

A Native American tribe is suing Amazon and Microsoft for infringing data-processing patents it holds.

The patents were assigned to the Saint Regis Mohawk Tribe by technology company SRC Labs, and it will receive a share of any award.

Tribal sovereignty means that the patents cannot be reviewed by the Patent Trial and Appeal Board.

A similar deal has drawn criticism from US lawmakers, who claim it is a loophole to avoid patent scrutiny.

Democratic US senator Claire McCaskill drafted a bill this month, in response to another attempt to transfer patents to the same tribe.

In that case, it was pharmaceutical giant Allergan, and a patent for dry-eye medication.

Ms McCaskill said at the time: “Congress never imagined tribes would allow themselves to be used by pharmaceutical companies to avoid challenges to patents, and this bill will shut the practice down before others follow suit.”

The tribe issued a statement questioning why the legislation targeted Native American tribes but not other sovereign governments or state universities.

John Tothill, a partner at law firm Dehns, said the US appeal board was frequently used to revoke patents.

“Microsoft or Amazon could pre-empt this action by applying to have the patents revoked,” he said.

“It is a way of playing the system, if you like, and trying to block potential litigation.

“I am assuming that passing the patents on like this stops that because the US federal government does not have rights over the sovereignty of Native Americans.”

Neither Amazon nor Microsoft has responded to requests for comment.

Fukushima disaster: The robots going where no human can

Robots have become central to the cleaning-up operation at Japan’s Fukushima nuclear power plant, six years after the tsunami that triggered the nuclear meltdown.

It is estimated that around 600 tonnes of toxic fuel may have leaked out of the reactor during the incident.

The Tokyo Electric Power Company is using a variety of robots to explore areas too dangerous for people to go near.

BBC Click was given rare access to the site to see how the decontamination work was progressing.

See more at Click’s website and @BBCClick.

Google removes cupcake calorie counter from Maps

Some users did not want to be told how many cakes they had walked

Google has decided to remove an update to Maps that shows users how many calories they would burn if they walked to their destination.

It follows what the search giant described as “strong user feedback”, with many criticising the feature as patronising, shaming and a possible trigger for eating disorders.

The pink cupcake calorie counter was also lambasted as being unscientific.

It will be removed by the end of the day, Google has confirmed.

The experimental feature was rolled out on the iOS version of Google Maps, beneath walking directions.

It told people how many calories they would burn if they walked and what that was in terms of cupcakes.
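The conversion behind the feature amounts to simple arithmetic. As an illustration only (the calorie constants below are assumptions, not the values Google Maps actually used):

```python
# Illustrative sketch of a walking-calories-to-cupcakes conversion.
# The constants are assumptions for illustration, not Google's figures.

KCAL_PER_KM = 55              # assumed calories burned per km walked
KCAL_PER_MINI_CUPCAKE = 110   # assumed calories in one mini cupcake

def walk_estimate(distance_km: float) -> tuple[float, float]:
    """Return (calories burned, equivalent mini cupcakes) for a walk."""
    calories = distance_km * KCAL_PER_KM
    return calories, calories / KCAL_PER_MINI_CUPCAKE

calories, cupcakes = walk_estimate(2.0)
print(f"{calories:.0f} kcal ~= {cupcakes:.1f} mini cupcakes")
```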

Tweet by @sandhya__k: “Just noticed that Google Maps tells you the number of mini cupcakes you'll burn on your walk. Not sure I want this info, esp after a workout”

Twitter user Taylor Lorenz summed up the attitude of many when she tweeted that the feature could be “extremely triggering” for someone with eating disorders.

She also criticised it because it could not be turned off, and for being “wildly inaccurate” because it failed to take into account general health information.


Priya Tew, a member of the Association of UK Dietitians, said: “Although it is good to encourage people to walk more, having the calories used on Google Maps does not seem to be the best way to do this.

“Firstly it encourages competition, trying to burn more calories each day which could be triggering for some people who have a tendency to over-exercise. Secondly it could make people feel shamed that they have not walked far enough or burned enough calories.

“If people want to count their calories then they should be given the option to do this, rather than it being enforced.”

Child safety smartwatches ‘easy’ to hack, watchdog says

Some smartwatches designed for children have security flaws that make them vulnerable to hackers, a watchdog has warned.

The Norwegian Consumer Council (NCC) tested watches from brands including Gator and GPS for Kids.

It said it discovered that attackers could track, eavesdrop or even communicate with the wearers.

The manufacturers involved insist the problems have either already been resolved or are being addressed.

UK retailer John Lewis has withdrawn one of the named smartwatch models from sale in response.

The smartwatches tested essentially serve as basic smartphones, allowing parents to communicate with their children as well as track their location.

Some include an SOS feature that allows the child to instantly call their parents.

They typically sell for about £100.

The NCC said it was concerned that Gator and GPS for Kids’ watches transmitted and stored data without encryption.

It said that meant strangers, using basic hacking techniques, could track children as they moved, or make a child appear to be in a completely different location.

The NCC plans a social media campaign to publicise its findings

Consumer rights watchdog Which? criticised the “shoddy” watches and said parents “would be shocked” if they knew the risks.

Spokesman Alex Neill said: “Safety and security should be the absolute priority. If that can’t be guaranteed, then the products should not be sold.”

John Lewis stocks a version of the Gator watch, although it is not clear whether it suffers from the same security flaws as the watches tested.

The firm said it was withdrawing the product from sale “as a precautionary measure” while awaiting “further advice and reassurance from the supplier”.

GPS for Kids said it had resolved the security flaws for new watches and that existing customers were being offered an upgrade.

The UK distributor of the Gator watch said it had moved its data to a new encrypted server and was developing a new, more secure app for customers.

Huawei Mate 10 uses AI to distinguish cats from dogs

The phone takes account of the fact cats’ eyes are more reflective than dogs’

Huawei says it has given its latest smartphones advanced object-recognition capabilities to help them take better photos than the competition.

The Chinese company says the artificial intelligence-based technology can even distinguish between cats and dogs in a split-second, allowing it to automatically tweak how their fur and eyes appear.

It says this is possible because of a new type of chip in the Mate 10 phones.

But experts question the tech’s appeal.

Huawei is currently the world’s third best-selling smartphone maker, according to research company IDC, with a market share of 11.3% in the April-to-June quarter.

That put it slightly behind Apple, which had a 12% share.

The Shenzhen-based company has previously said it aims to overtake its US rival before the end of 2019, and then eventually leapfrog the market leader, Samsung.

Artificial brain

Huawei says it trained the Mate 10’s camera-controlling algorithms with more than 100 million pictures to teach them to recognise different scenarios and items.

To ensure the decisions are taken quickly enough, the company said, it had developed its own processor – the Kirin 970 – which has a neural processing unit (NPU) in addition to the standard central processing unit (CPU) and graphics processing unit (GPU) used to power most computers.

Huawei says the Kirin chip benefits from machine-learning work that involved more than 100 million images

The architecture of the NPU is a specialised part of the chip designed to handle matrix multiplications at speed – a type of calculation used by artificial intelligence neural networks, which attempt to mimic the way the brain works.
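The operation an NPU accelerates is, at its core, straightforward: a neural-network layer is essentially a matrix multiplication followed by a non-linearity. A minimal sketch (the layer sizes are arbitrary):

```python
# Minimal sketch of the matrix multiplication at the heart of a
# neural-network layer - the operation an NPU is built to accelerate.
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One fully connected layer: matmul, add bias, apply ReLU."""
    return np.maximum(x @ weights + bias, 0.0)

# Arbitrary sizes: a batch of 8 inputs with 128 features -> 64 outputs.
x = rng.standard_normal((8, 128))
w = rng.standard_normal((128, 64))
b = np.zeros(64)

out = dense_layer(x, w, b)
print(out.shape)  # (8, 64)
```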

Huawei says the inclusion of an NPU in its chip allows it to recognise about 2,000 images per minute.

That is about double the rate that the new A11 processor in the iPhone 8 would be able to handle, Huawei says.

The company is not alone in designing a part of its processor to handle AI-enhanced tasks.

Apple has introduced what it calls a “bionic neural engine” in the A11, which will be used for facial recognition tasks by the forthcoming iPhone X.

And Google has developed what it terms a “tensor processing unit”, which it uses in its data centres to support its Search, Street View, Photos and Translate services.

One expert suggested Huawei’s move was significant but difficult to market to consumers.

The Mate 10 Pro has a 6in (15cm) OLED screen – there is also a slightly smaller 5.9in LED version

“All of this could have been done on a GPU,” said Ian Cutress, from the Anandtech engineering news site.

“But having the NPU makes the processes faster while potentially using less power.

“The thing is that it’s very difficult to explain all this to potential customers as it gets very technical very quickly.

“For now, the use cases are limited and probably not going to be the sole reason to buy the device.”

Fur analysis

Many smartphone cameras make automatic tweaks to the images they take, but Huawei suggests its technology takes this to the next level.

The camera takes account of the fact cats tend to have longer hair than dogs

In the example of cats and dogs, it says:

  • because cats’ eyes are more reflective than dogs’, in bright interior light and sunlight the camera adjusts down the ISO level when a close-up of the animal is being taken
  • to take account of differences in the type of hair or fur the pets have, the software alters the image sharpness via the amount of noise reduction it applies
  • since the camera has been trained to expect cats to be smaller than dogs, it also makes an adjustment to the depth of field
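Conceptually, the behaviour described above is a mapping from a detected scene class to a set of capture parameters. A toy sketch of that idea (the class names and parameter values are invented for illustration; Huawei has not published its tuning tables):

```python
# Toy sketch of scene-dependent camera tuning.
# Classes and parameter values are invented for illustration only.

SCENE_PRESETS = {
    "cat": {"iso_bias": -1, "noise_reduction": "low", "depth_of_field": "shallow"},
    "dog": {"iso_bias": 0, "noise_reduction": "medium", "depth_of_field": "normal"},
    "food": {"saturation": 1, "contrast": 1},
}

def capture_params(detected_class: str) -> dict:
    """Return tuning overrides for a detected scene, or defaults."""
    return SCENE_PRESETS.get(detected_class, {})

print(capture_params("cat")["noise_reduction"])  # low
```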

“Without doing lots of tests, it’s difficult to tell how much value this really adds to the camera capabilities,” said Ben Wood, from the technology consultancy CCS Insight.

“The problem with any of these techniques is that whether they are of benefit or not is in the eye of the beholder – it’s very subjective.”

If food is detected, the camera system adds saturation and contrast to the image

Huawei also says the NPU is used to optimise tasks carried out by Microsoft’s pre-loaded Translator software, which converts words and images of text between dozens of languages.

According to the Chinese company, the software runs about three times faster than it would do otherwise.

The company is now inviting third-party developers to build other apps to take advantage of the NPU.

But Mr Wood said that he had concerns that Huawei was putting too much emphasis on the technology.

This unedited photo of a cat was taken with the Mate 10 Pro

“I don’t believe most consumers understand what AI is,” said Mr Wood.

“So, if Huawei intends to market the new phones around the technology, it will have to clearly articulate what the benefits are beyond it just being the buzzword of the moment.”

Wi-fi security flaw ‘puts devices at risk of hacks’

WATCH: Wi-fi security flaw explained

The wi-fi connections of businesses and homes around the world are at risk, according to researchers who have revealed a major flaw dubbed Krack.

It concerns an authentication system which is widely used to secure wireless connections.

Experts said it could leave “the majority” of connections at risk until they are patched.

The researchers added the attack method was “exceptionally devastating” for Android 6.0 or above and Linux.

A Google spokesperson said: “We’re aware of the issue, and we will be patching any affected devices in the coming weeks.”

The US Computer Emergency Readiness Team (Cert) has issued a warning on the flaw.

“US-Cert has become aware of several key management vulnerabilities in the four-way handshake of wi-fi protected access II (WPA2) security protocol,” it said.

“Most or all correct implementations of the standard will be affected.”

Most wi-fi devices could be at risk

Prof Alan Woodward, a computer security expert at the University of Surrey, said: “This is a flaw in the standard, so potentially there is a high risk to every single wi-fi connection out there, corporate and domestic.

“The risk will depend on a number of factors including the time it takes to launch an attack and whether you need to be connected to the network to launch one, but the paper suggests that an attack is relatively easy to launch.

“It will leave the majority of wi-fi connections at risk until vendors of routers can issue patches.”

Industry body the Wi-Fi Alliance said that it was working with providers to issue software updates to patch the flaw.

“This issue can be resolved through straightforward software updates and the wi-fi industry, including major platform providers, has already started deploying patches to wi-fi users.

“Users can expect all their wi-fi devices, whether patched or unpatched, to continue working well together.”

It added that there was “no evidence” that the vulnerability had been exploited maliciously.

Tech giant Microsoft said that it had already released a security update.

Security handshake

The vulnerability was discovered by researchers led by Mathy Vanhoef, from the Belgian university KU Leuven.

According to his paper, the issue centres on the handshake’s use of a nonce (a number that should be used only once), which can in fact be reused, allowing an attacker to enter a network and snoop on the data being sent over it.
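The danger of nonce reuse can be seen with a stream cipher: if the same key and nonce produce the same keystream twice, XOR-ing the two ciphertexts cancels the keystream and exposes the relationship between the plaintexts. A minimal demonstration, using a toy keystream rather than WPA2’s actual cipher:

```python
# Why reusing a nonce is catastrophic for a stream cipher.
# Toy keystream for illustration - not WPA2's actual cipher.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Deterministic toy keystream derived from key and nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"secret key", b"nonce-1"
p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"

# Same key + same nonce => same keystream for both messages.
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# The keystream cancels out: an eavesdropper learns p1 XOR p2
# without ever knowing the key.
assert xor(c1, c2) == xor(p1, p2)
```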

“All protected wi-fi networks use the four-way handshake to generate a fresh session key and so far this 14-year-old handshake has remained free from attacks,” he writes in the paper describing Krack (key reinstallation attacks).

“Every wi-fi device is vulnerable to some variants of our attacks. Our attack is exceptionally devastating against Android 6.0: it forces the client into using a predictable all-zero encryption key.”

Dr Steven Murdoch, from University College London, said there were two mitigating factors to what he agreed was a “huge vulnerability”.

“The attacker has to be physically nearby and if there is encryption on the web browser, it is harder to exploit.”

More details can be found at this website.

Krack explained

Prof Alan Woodward explained the issue to the BBC.

When any device uses wi-fi to connect to, say, a router it does what is known as a “handshake”: it goes through a four-step dialogue, whereby the two devices agree a key to use to secure the data being passed (a “session key”).

This attack begins by tricking a victim into reinstalling the live key by replaying a modified version of the original handshake. In doing this a number of important set-up values can be reset which can, for example, render certain elements of the encryption much weaker.

This attack appears to work on all the wi-fi implementations tested – prior to the patches currently being issued.

In some it is possible to decrypt and inject data, enabling an attacker to hijack a connection. In others it is even worse as it is possible to forge a connection, which, as the researchers note, is “catastrophic”.

Not all routers will be affected but the people this could be most problematic for are the internet service providers who have millions of routers in customers’ homes. How will they make sure all of them are secure?

UK TV drama about North Korea hit by cyber-attack

Kim Jong-un’s officials described Opposite Number as being “slanderous”

North Korean hackers targeted a British television company making a drama about the country, it has emerged.

The series – due to be written by an Oscar-nominated screenwriter – has been shelved.

In August 2014, Channel 4 announced what it said would be a new “bold and provocative” drama series.

Entitled Opposite Number, the programme’s plot involved a British nuclear scientist taken prisoner in North Korea.

The production firm involved – Mammoth Screen – subsequently had its computers attacked.

The project has not moved forward – because of a failure to secure funding, the company says.

‘Hair on fire’

North Korean officials had responded in anger when details of the TV series were first revealed. Pyongyang described the plot as a “slanderous farce” as it called on the British government to pull the series in order to avoid damaging relations.

The North Koreans did more than protest though – they hacked into the computer networks of the company behind the show.

The incident was first reported by the New York Times, which cited Channel 4 as the main target. The BBC understands though that it was actually Mammoth Screen which was hit by hackers.

Opposite Number’s screenwriter Matt Charman was nominated for an Oscar for the 2015 Spielberg movie Bridge of Spies

The attack did not inflict any damage but the presence of North Korean hackers on the system caused widespread alarm over what they might do.

“They were running around with their hair on fire,” a TV executive from another company told the BBC, describing the level of concern.

British intelligence was also aware of the attack.

The concern was compounded because Sony Pictures experienced a significant cyber-attack in November 2014. A group called the Guardians of Peace claimed it was behind it but US officials said they believed North Korea was responsible.

That attack was also in retaliation for a drama – in this case the planned release of the film The Interview, a comedy in which the North Korean leader was assassinated.

The studio had its emails stolen and publicly released but also had a significant portion of its computer network destroyed by the attackers. The film was eventually released online amid concerns that cinemas would not show it because of threats.

Sony pulled The Interview from US cinemas after it was hacked

It also led to a strong reaction from the Obama White House, including the imposition of sanctions. There was no commensurate complaint from the British government, despite officials knowing that a UK company had also been targeted – although not affected in the same way as Sony Pictures.

Increased aggression

In the UK, Opposite Number has been shelved. The drama was due to be the second commission to come out of Channel 4’s newly formed international drama division.

At the time, Mammoth Screen and its distribution partner, ITV Studios Global Entertainment, said they were seeking an international partner. But a spokeswoman for ITV Studios – which purchased Mammoth Screen in 2015 – told the BBC in February that “the co-production hasn’t progressed because third-party funding has not been secured”.

Those involved will not comment on whether the failure to attract funding and move forward with the production was in any way linked to the cyber-attack.

Mammoth Screen went on to make the ITV/PBS series Victoria

The cyber-threats from North Korea have not stopped. Its hackers have proved increasingly aggressive and adept, targeting banks to steal money and media in South Korea.

British officials also believe North Korea was behind the Wannacry ransomware which struck around the world in May with significant parts of the NHS affected, although there has been no official response from the UK government to this incident.

But the revelations about an attack on a TV production company may raise further concerns about what North Korea is capable of and how companies in the UK – and the British government – react when it happens.


Can we teach robots ethics?


We are not used to the idea of machines making ethical decisions, but the day when they will routinely do this – by themselves – is fast approaching. So how, asks the BBC’s David Edmonds, will we teach them to do the right thing?

The car arrives at your home bang on schedule at 8am to take you to work. You climb into the back seat and remove your electronic reading device from your briefcase to scan the news. There has never been trouble on the journey before: there’s usually little congestion. But today something unusual and terrible occurs: two children, wrestling playfully on a grassy bank, roll on to the road in front of you. There’s no time to brake. But if the car skidded to the left it would hit an oncoming motorbike.

Neither outcome is good, but which is least bad?

The year is 2027, and there’s something else you should know. The car has no driver.

Dr Amy Rimmer believes self-driving cars will save lives and cut down on emissions

I’m in the passenger seat and Dr Amy Rimmer is sitting behind the steering wheel.

Amy pushes a button on a screen, and, without her touching any more controls, the car drives us smoothly down a road, stopping at a traffic light, before signalling, turning a sharp left, navigating a roundabout and pulling gently into a lay-by.

The journey’s nerve-jangling for about five minutes. After that, it already seems humdrum. Amy, a 29-year-old with a Cambridge University PhD, is the lead engineer on the Jaguar Land Rover autonomous car. She is responsible for what the car sensors see, and how the car then responds.

She says that this car, or something similar, will be on our roads in a decade.

Many technical issues still need to be overcome. But one obstacle for the driverless car – which may delay its appearance – is not merely mechanical, or electronic, but moral.

The dilemma prompted by the children who roll in front of the car is a variation on the famous (or notorious) “trolley problem” in philosophy. A train (or tram, or trolley) is hurtling down a track. It’s out of control. The brakes have failed. But disaster lies ahead – five people are tied to the track. If you do nothing, they’ll all be killed. But you can flick the points and redirect the train down a side-track – so saving the five. The bad news is that there’s one man on that side-track and diverting the train will kill him. What should you do?


This question has been put to millions of people around the world. Most believe you should divert the train.

But now take another variation of the problem. A runaway train is hurtling towards five people. This time you are standing on a footbridge overlooking the track, next to a man with a very bulky rucksack. The only way to save the five is to push Rucksack Man to his death: the rucksack will block the path of the train. Once again it’s a choice between one life and five, but most people believe that Rucksack Man should not be killed.


This puzzle has been around for decades, and still divides philosophers. Utilitarians, who believe that we should act so as to maximise happiness, or well-being, think our intuitions are wrong about Rucksack Man. Rucksack Man should be sacrificed: we should save the five lives.

Trolley-type dilemmas are wildly unrealistic. Nonetheless, in the future there may be a few occasions when the driverless car does have to make a choice – which way to swerve, who to harm, or who to risk harming? These questions raise many more. What kind of ethics should we programme into the car? How should we value the life of the driver compared to bystanders or passengers in other cars? Would you buy a car that was prepared to sacrifice its driver to spare the lives of pedestrians? If so, you’re unusual.

Then there’s the thorny matter of who’s going to make these ethical decisions. Will the government decide how cars make choices? Or the manufacturer? Or will it be you, the consumer? Will you be able to walk into a showroom and select the car’s ethics as you would its colour? “I’d like to purchase a Porsche utilitarian ‘kill-one-to-save-five’ convertible in blue please…”

Find out more

  • Listen to Can We Teach Robots Ethics? on Analysis, on BBC Radio 4, at 20:30 on Monday 16 October – or catch up later on the BBC iPlayer
  • Listen to The Inquiry on the BBC World Service – click here for transmission times or to listen online

Ron Arkin became interested in such questions when he attended a conference on robot ethics in 2004. He listened as one delegate was discussing the best bullet to kill people – fat and slow, or small and fast? Arkin felt he had to make a choice “whether or not to step up and take responsibility for the technology that we’re creating”. Since then, he’s devoted his career to working on the ethics of autonomous weapons.

There have been calls for a ban on autonomous weapons, but Arkin takes the opposite view: if we can create weapons which make it less likely that civilians will be killed, we must do so. “I don’t support war. But if we are foolish enough to continue killing ourselves – over God knows what – I believe the innocent in the battle space need to be better protected,” he says.

Like driverless cars, autonomous weapons are not science fiction. There are already weapons that operate without being fully controlled by humans: missiles exist that can change course if they are confronted by an enemy counter-attack, for example. Arkin’s approach is sometimes called “top-down” – that is, he thinks we can programme robots with something akin to the rules of war in the Geneva Conventions, prohibiting, for example, the deliberate killing of civilians. Even this is a horrendously complex challenge: the robot would have to distinguish between an enemy combatant wielding a knife to kill and a surgeon using a knife to save the injured.
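To make the “top-down” idea concrete, here is a deliberately crude sketch: a set of hard-coded prohibitions checked before any action is permitted. The class, attribute names and rules below are invented for illustration – real proposals, such as Arkin’s “ethical governor”, are vastly more complex, and, as the surgeon example shows, the classification step is where the real difficulty lies.

```python
# A caricature of the "top-down" approach: hard-coded rules of
# engagement that veto any prohibited action. Everything here is
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Contact:
    is_combatant: bool       # the classification is itself the hard problem
    is_surrendering: bool
    in_protected_zone: bool  # e.g. near a hospital or school

def engagement_permitted(c: Contact) -> bool:
    """Return False if any hard-coded prohibition applies."""
    if not c.is_combatant:
        return False   # never deliberately target civilians
    if c.is_surrendering:
        return False   # a surrendering fighter is off-limits
    if c.in_protected_zone:
        return False   # protected sites are off-limits
    return True
```

The surgeon with a knife fails the first check – but only if the system has already classified him correctly, which no rule in the table can do by itself.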

An alternative way to approach these problems involves what is known as “machine learning”.

Susan Anderson is a philosopher, Michael Anderson a computer scientist. As well as being married, they’re professional collaborators. The best way to teach a robot ethics, they believe, is to first programme in certain principles (“avoid suffering”, “promote happiness”), and then have the machine learn from particular scenarios how to apply the principles to new situations.

Image caption: A humanoid robot developed by Aldebaran Robotics interacts with residents at a care home (Getty Images)

Take carebots – robots designed to assist the sick and elderly by bringing food or a book, or by turning on the lights or the TV. The carebot industry is expected to burgeon in the next decade. Like autonomous weapons and driverless cars, carebots will have choices to make. Suppose a carebot is faced with a patient who refuses to take his or her medication. That might be all right for a few hours – the patient’s autonomy is a value we would want to respect – but there will come a time when help needs to be sought, because the patient’s life may be in danger.

The Andersons believe that, after processing a series of dilemmas by applying its initial principles, the robot would become clearer about how it should act. Humans could even learn from it. “I feel it would make more ethically correct decisions than a typical human,” says Susan. Neither Anderson is fazed by the prospect of being cared for by a carebot. “Much rather a robot than the embarrassment of being changed by a human,” says Michael.
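A toy version of this learn-from-cases approach can be sketched in a few lines of code: each option in a dilemma is scored against a handful of duties, and the weight given to each duty is adjusted until the machine ranks the preferred option in every training case above the rejected one. The duty names, scores and dilemmas below are invented for illustration – the Andersons’ actual systems are far more sophisticated.

```python
# Toy sketch of learning ethical principles from example cases.
# All duty names and numbers are invented for illustration.

DUTIES = ["respect_autonomy", "prevent_harm", "promote_good"]

def score(option, weights):
    """Weighted sum of how well an option satisfies each duty (-1..1)."""
    return sum(weights[d] * option[d] for d in DUTIES)

def train(cases, epochs=50, lr=0.1):
    """Perceptron-style updates: nudge the weights until the preferred
    option in every training case outscores the rejected one."""
    weights = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for preferred, rejected in cases:
            if score(preferred, weights) <= score(rejected, weights):
                for d in DUTIES:
                    weights[d] += lr * (preferred[d] - rejected[d])
    return weights

# Training dilemmas: (preferred option, rejected option).
cases = [
    # Patient refuses a low-stakes dose: respect the refusal.
    ({"respect_autonomy": 1, "prevent_harm": -0.2, "promote_good": 0},
     {"respect_autonomy": -1, "prevent_harm": 0.2, "promote_good": 0.1}),
    # Refusal now threatens the patient's life: alert the doctor.
    ({"respect_autonomy": -1, "prevent_harm": 1, "promote_good": 0.5},
     {"respect_autonomy": 1, "prevent_harm": -1, "promote_good": -0.5}),
]

weights = train(cases)

# A new situation: serious harm looms if the carebot does nothing.
notify = {"respect_autonomy": -1, "prevent_harm": 0.9, "promote_good": 0.4}
wait   = {"respect_autonomy": 1, "prevent_harm": -0.9, "promote_good": -0.4}
```

After training on the two dilemmas, the learned weights rank “notify the doctor” above “keep waiting” in the new, previously unseen situation – the machine has generalised, in a very small way, from its cases.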

However, machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language by mimicking humans have been shown to import various biases: male and female names carry different associations, so the machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases and try to combat them.
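The name-association effect can be illustrated with a toy calculation. In real systems, words are represented as vectors with hundreds of dimensions, learned from enormous bodies of text; the three-dimensional “embeddings” below are invented for the example, but the measurement – comparing how close each name sits to “scientist” – mirrors how such biases are actually detected.

```python
# Toy illustration of bias in word vectors: invented 3-d "embeddings"
# in which a male name sits closer to "scientist" than a female name.
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction, 0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vectors = {
    "scientist": (0.9, 0.1, 0.2),
    "john":      (0.8, 0.2, 0.1),  # invented: near "scientist"
    "fiona":     (0.1, 0.9, 0.3),  # invented: far from "scientist"
}

# A positive gap means the model associates "john" with "scientist"
# more strongly than "fiona" - an imported bias, not a fact.
bias = (cosine(vectors["john"], vectors["scientist"])
        - cosine(vectors["fiona"], vectors["scientist"]))
```

Auditing a trained model means running exactly this kind of comparison at scale and then deciding what, if anything, to correct.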

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code – a way of scrutinising what’s happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what’s the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot’s bad actions.

One big advantage of robots is that they will behave consistently: they will operate in the same way in similar situations. The autonomous weapon won’t make bad choices because it is angry. The autonomous car won’t get drunk or tired, and it won’t shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year – most through human error. Reducing those numbers is a big prize.

Quite how much we should value consistency is an interesting issue, though. If robot judges provide consistent sentences for convicted criminals, this seems to be a powerful reason to delegate the sentencing role. But would nothing be lost in removing the human contact between judge and accused? Prof John Tasioulas at King’s College London believes there is value in messy human relations. “Do we really want a system of sentencing that mechanically churns out a uniform answer in response to the agonising conflict of values often involved? Something of real significance is lost when we eliminate the personal integrity and responsibility of a human decision-maker,” he argues.

Amy Rimmer is excited about the prospect of the driverless car. It’s not just the lives saved: the car will reduce congestion and emissions, and will be “one of the few things you will be able to buy that will give you time”. What would it do in our trolley conundrum – crash into two kids, or veer in front of an oncoming motorbike? Jaguar Land Rover hasn’t yet considered such questions, but Amy is not convinced that matters: “I don’t have to answer that question to pass a driving test, and I’m allowed to drive. So why would we dictate that the car has to have an answer to these unlikely scenarios before we’re allowed to get the benefits from it?”

That’s an excellent question. If driverless cars save lives overall, why not allow them on to the road before we resolve what they should do in very rare circumstances? Ultimately, though, we’d better hope that our machines can be ethically programmed – because, like it or not, in the future more and more decisions that are currently taken by humans will be delegated to robots.

There are certainly reasons to worry. We may not fully understand why a robot has made a particular decision. And we need to ensure that the robot does not absorb and compound our prejudices. But there’s also a potential upside. The robot may turn out to be better at some ethical decisions than we are. It may even make us better people.

Illustrations are from Would You Kill the Fat Man? by David Edmonds (Princeton University Press, 2014)
