Windows 7 hardest hit by WannaCry worm

Image caption: The WannaCry outbreak got started by infecting a small number of vulnerable machines

The majority of machines hit by the WannaCry ransomware worm in the cyber-attack earlier this month were running Windows 7, security firms suggest.

More than 97% of the infections seen by Kaspersky Lab and 66% of those seen by BitSight used the older software.

WannaCry started spreading in mid-May and, so far, has infected more than 200,000 computers around the world.

In the UK, some hospitals had to turn away patients as the worm shut down computer systems.

Many suggested that UK hospitals suffered because many of them still relied on programmes that required Windows XP – a version of Microsoft’s OS that debuted in 2001.

But infections of XP by WannaCry were “insignificant”, said Costin Raiu from Kaspersky Lab.

Windows 7 was first released in 2009. Figures from Kaspersky showed that the most widely infected version was the x64 edition, which is widely used in large organisations.

Many organisations seem to have been caught out because they failed to apply a patch, issued by Microsoft in March, that blocked the vulnerability WannaCry exploited.

Spanish telecoms firm Telefonica, French carmaker Renault, German rail firm Deutsche Bahn, logistics firm Fedex, Russia’s interior ministry and 61 NHS organisations were all caught out by WannaCry.

After encrypting files, the WannaCry worm demanded a payment of £230 ($300) in bitcoins to unfreeze them. So far, a reported 296 payments totalling $99,448 (£76,555) have been made to the bitcoin wallets tied to the ransomware.

There have been no reports that anyone who paid has had their data restored by the gang behind the attack.

Security experts also found that the worm spread largely by seeking out vulnerable machines on the net by itself. Before now, many thought it had got started via an email-based phishing campaign.

Adam McNeil, a senior malware analyst at Malwarebytes, said the worm was primed to look for machines vulnerable to a bug in a Microsoft technology known as the Server Message Block (SMB).

“The attackers initiated an operation to hunt down vulnerable public facing SMB ports and, once located, used the newly available SMB exploits to deploy malware and propagate to other vulnerable machines within connected networks,” he wrote.

Mr McNeil said he suspected that whoever was behind the worm first identified a “few thousand” vulnerable machines which were used as the launch platform for the much larger waves of infection.
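The SMB service WannaCry abused listens on TCP port 445. As a rough illustration of how such exposure can be audited on your own hosts (the addresses below are placeholder TEST-NET values, not real targets), a plain socket probe is enough to see whether the port is publicly reachable:

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: audit a list of hosts for publicly reachable SMB.
# 192.0.2.x is the TEST-NET range, used here purely as a placeholder.
for host in ["192.0.2.10", "192.0.2.11"]:
    status = "exposes SMB" if smb_port_open(host, timeout=0.5) else "SMB unreachable"
    print(host, status)
```

A host answering on port 445 is not necessarily vulnerable, only reachable; the actual exploit targeted a flaw in the SMBv1 implementation behind that port.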

Leaks ‘expose peculiar Facebook moderation policy’

Image caption: The guidelines Facebook uses to decide what users see are ‘confusing’, say staff

How Facebook censors what its users see has been revealed by internal documents, the Guardian newspaper says.

It said the manuals revealed the criteria used to judge if posts were too violent, sexual, racist, hateful or supported terrorism.

The Guardian said Facebook’s moderators were “overwhelmed” and had only seconds to decide if posts should stay.

The BBC understands the documents seen by the newspaper closely resemble those Facebook currently uses to guide staff.

The leak comes soon after British MPs said social media giants were “failing” to tackle toxic content.


Careful policing

The newspaper said it had managed to get hold of more than 100 manuals used internally at Facebook to educate moderators about what could, and could not, be posted on the site.

The manuals cover a vast array of sensitive subjects, including hate speech, revenge porn, self-harm, suicide, cannibalism and threats of violence.

Facebook moderators interviewed by the newspaper said the policies Facebook used to judge content were “inconsistent” and “peculiar”.

The decision-making process for judging whether content about sexual topics should stay or go was among the most “confusing”, they said.

The Open Rights Group, which campaigns on digital rights issues, said the report started to show how much influence Facebook could wield over its two billion users.

“Facebook’s decisions about what is and isn’t acceptable have huge implications for free speech,” said an ORG statement. “These leaks show that making these decisions is complex and fraught with difficulty.”

It added: “Facebook will probably never get it right but at the very least there should be more transparency about their processes.”

Image caption: Facebook boss Mark Zuckerberg said the company was working on better tools to spot toxic content

‘Alarming’ insight

In a statement, Monica Bickert, Facebook’s head of global policy management, said: “We work hard to make Facebook as safe as possible, while enabling free speech.

“This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously,” she added.

As well as human moderators that look over possibly contentious posts, Facebook is also known to use AI-derived algorithms to review images and other information before they are posted. It also encourages users to report pages, profiles and content they feel is abusive.

In early May, the UK parliament’s influential Home Affairs Select Committee strongly criticised Facebook and other social media companies as being “shamefully far” from tackling the spread of hate speech and other illegal and dangerous content.

The government should consider making sites pay to help police content, it said.

Soon after, Facebook revealed it planned to hire more than 3,000 extra people to review content.

British charity the National Society for the Prevention of Cruelty to Children (NSPCC) said the report into how Facebook worked was “alarming to say the least”.

“It needs to do more than hire an extra 3,000 moderators,” said a statement from the organisation.

“Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”

Analysis: Rory Cellan-Jones, BBC Technology Correspondent

It has been clear for a while that dealing with controversial content is just about the most serious challenge that Facebook faces.

These leaked documents show how fine a line its moderators have to tread between keeping offensive and dangerous material off the site – and suppressing free speech.

A Facebook insider told me he thought the documents would show just how seriously and thoughtfully the company took these issues.

Why then does it not publish its training manual for moderators so that the world could see where it draws the line?

There are community guidelines available to read on Facebook but the company fears that if it gives away too much detail on its rules, that will act as a guide to those trying to game the system.

But what will strike many is that they have seen this kind of document before. Most big media organisations will have a set of editorial guidelines, coupled with a style guide, laying out just what should be published and how. Staff know that if they contravene those rules they are in trouble.

Of course, Facebook insists that it is a platform where people come to share content, rather than a media business.

That line is becoming ever harder to maintain, as governments wake up to the fact that the social media giant is more powerful than any newspaper or TV channel in shaping how the public sees the world.

Electoral Commission wants powers to tackle election meddling from abroad


A probe into the political use of private data has been opened by the information commissioner.

Elizabeth Denham announced the review amid concerns over allegations involving an analytics firm linked to a Brexit campaign.

It follows calls for an investigation into claims that Leave.EU had not declared the role of Cambridge Analytica (CA) in its campaign.

The Electoral Commission says its powers do not extend beyond the UK.

But Ms Denham said: “Having considered the evidence we have already gathered, I have decided to open a formal investigation into the use of data analytics for political purposes.

“This will involve deepening our current activity to explore practices deployed during the UK’s EU referendum campaign, but potentially also in other campaigns.”

The probe was sparked by Labour’s Stephen Kinnock, a remain campaigner, who called on the Electoral Commission to look into links between Leave.EU and CA.

Claire Bassett, the commission’s chief executive, said that, while it had “very clear rules” governing the permissibility of donations and printed materials, such as campaign leaflets, it had no power to stop overseas individuals or governments using social media to influence British elections.

“At the moment the rules apply to print media – so if you get a leaflet through your door, that should have an imprint on it which makes it clear who’s produced that leaflet and where it’s come from so you know who’s campaigning for your vote,” she said.

“At the moment those rules don’t extend to social media and we’ve recommended that that should happen.”

High priority

But quizzed about how far the electoral watchdog could go to prevent individuals or governments attempting to influence British elections via data analytics companies that target voters, Ms Bassett said: “If something is happening outside of the borders of this country and is not part of any of the regime we’re responsible for, it’s not something we can cover within our regulation.”

Ms Denham said it was “understandable” that “political campaigns are exploring the potential of advanced data analysis tools to help win votes”, but said the “public have the right to expect that this takes place in accordance with the law”.

“This is a complex and rapidly evolving area of activity and the level of awareness among the public about how data analytics works, and how their personal data is collected, shared and used through such tools is low,” she said.

“What is clear is that these tools have a significant potential impact on individuals’ privacy.

“It is important that there is a greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and the law is upheld.”

‘No involvement’

Ms Denham said the investigation was a “high priority for her team” and that she was “conscious” that it coincides with the general election campaign.

The probe follows an Observer investigation suggesting there were links between data analytics firms, a US billionaire and the Leave campaign in last year’s EU referendum.

A Cambridge Analytica spokesman said the firm was happy to help the watchdog with any inquiry into the use of data analytics in politics but that it had had “no involvement” in the EU referendum.

The Electoral Commission found the Tories spent £1.2m on Facebook campaigns during the 2015 election – more than seven times the £160,000 spent by Labour. The Liberal Democrats spent just over £22,000.

Leave campaigners spent £3.5m with a technology company called Aggregate IQ. Vote Leave said it allowed them to target swing voters online much more effectively and efficiently.

But BBC media editor Amol Rajan said that while huge amounts of money were being spent by political parties online, not everyone was “transparent about their ambitions online”.

“We know that millions and millions of pounds have been spent by various people – foreign forces, sometimes extremists – who are politically advertising online trying to influence elections and they are not regulated,” he said.

“The fact is the technology is changing very fast but the law hasn’t kept pace.

“When it comes to broadcast advertising, we tend to know who’s advertising, how much money they are spending and they tend to do it within certain social norms, but when it comes to political advertising online, it’s very unclear who is spending the money and to what end….

“The point is we simply don’t have clear regulations that require people to be transparent. The implication is that they might be foreign forces; they might be very wealthy individuals who are having a material impact on elections in western or non-Western democracies and we simply don’t know about it.

“It seems pretty obvious if we regulate political advertising in other spheres we need to think very hard about the impact of political advertising online too.”

London City first in UK to get remote air traffic control

Media caption: Richard Westcott has visited London City Airport’s new control centre, 120 miles from the landing strip

London City is to become the first UK airport to replace its air traffic control tower with a remotely operated digital system.

Instead of sitting in a tower overlooking the runway, controllers will be 120 miles away, watching live footage from high-definition cameras.

The new system, due to be completed in 2018, will be tested for a year before becoming fully operational in 2019.

It has already been tested in Australia, Sweden, Norway and Ireland.

The technology has been developed by Saab, the Swedish defence and security company, and will be introduced as part of a £350m development programme to upgrade London City Airport.

It will also include an extended terminal building, enabling it to serve two million more passengers a year by 2025.

The remote digital system will provide controllers with a 360-degree view of the airfield via 14 high-definition cameras and two cameras which are able to pan, tilt and zoom.

The cameras will send a live feed via fibre cables to a new operations room built at the Hampshire base of Nats, Britain’s air traffic control provider.

As well as seeing the airport, controllers will be able to hear it, as if they were in situ.

Unlike the old tower, the new system will allow controllers to zoom in for a better view and put radar data onto the screen to track aircraft.

BBC transport correspondent Richard Westcott says a critical new safety feature means the cameras will be able to pick out rogue drones near the airport, as well as light the runway at night.

Image caption: The new system is part of a £350m development programme to upgrade London City Airport

Image caption: The remote digital system is expected to be fully operational in late 2019

Responding to questions about safety and potential system failure, London City Airport said the system had been independently stress-tested by security specialists.

The system will use three different cables, taking different routes between the airport and the control centre, to ensure there is a backup if one of those cables fails.
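One common way such redundancy is implemented (a minimal sketch only; the endpoint names are invented and this is not Nats’ actual design) is to try each route in turn and fall back when a link fails:

```python
import socket

# Three notional feed endpoints, one per physical cable route (hypothetical names).
ROUTES = [("feed-a.example.net", 9000),
          ("feed-b.example.net", 9000),
          ("feed-c.example.net", 9000)]

def connect_with_failover(routes, timeout=1.0):
    """Return a socket for the first route that answers; raise if all fail."""
    last_error = None
    for host, port in routes:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc  # this route is down: try the next cable
    raise ConnectionError(f"all routes failed: {last_error}")
```

In a real deployment the healthy links would be monitored continuously rather than probed only at connect time, but the fallback logic is the same idea.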

Declan Collier, London City Airport chief executive, said he was “absolutely confident” that the system is safe from the threat of a cyber attack.

“No chief executive is complacent about threats from cyber security,” he said.

“But we are very confident that the systems we’re putting in place here are secure, they’re safe, they’re managed very well.”

‘Won over’

Steve Anderson, from Nats, told the BBC he had been won over by the technology after initially being “sceptical”.

He said: “They give the controller more information in terms of what they can see, what they can hear.”

The airport is planning to decommission its traditional tower in 2019, replacing it with a new 164ft (50m) digital tower – 104ft (32m) taller than the existing one.

The system made its world debut in Sweden at Ornskoldsvik Airport, where flights have been controlled by a remote tower in Sundsvall, 110 miles (177km) away, since 2015.

Nats airports director Mike Stoller said: “Digital towers are going to transform the way air traffic services are provided at airports by providing real safety, operational and efficiency benefits.

“We do see this as being a growing market place across the UK and the world.”

FCC votes to overturn net neutrality rules

Image caption: FCC chairman Ajit Pai said existing rules hampered the growth of the tech sector

The US Federal Communications Commission has voted to overturn rules that force ISPs to treat all data traffic as equal.

Commissioners at the agency voted two-to-one to end a “net neutrality” order enacted in 2015.

Ajit Pai, head of the FCC, said the rules demanding an open internet harmed jobs and discouraged investment.

Many Americans and technology firms filed objections to the FCC’s proposal prior to the vote.

“This is the right way to go,” said Mr Pai ahead of the vote on Thursday.

In a statement, the FCC said it expected its proposed changes to “substantially benefit consumers and the marketplace”. It added that the lighter-touch approach in place before the rules were changed in 2015 had helped to preserve a “flourishing free and open internet for almost 20 years”.

Equal access

The vote by the FCC commissioners is the first stage in the process of dismantling the net neutrality regulations.

The agency is now inviting public comment on whether it should indeed dismantle the rules. Americans have until mid-August to share their views with the FCC.

This call for comments is likely to attract a huge number of responses. Prior to the vote, more than 1 million statements supporting net neutrality were filed on the FCC site.

Image caption: John Oliver urged his viewers to post comments to the FCC, opposing the reversal of net neutrality rules

Many people responded to a call from comedian and commentator John Oliver to make their feelings known.

Separately, some protesters also used software bots to repeatedly file statements on the site.

Many fear that once the equal access rules go, ISPs will start blocking and throttling some data while letting other packets travel on “fast lanes” because firms have paid more to reach customers quicker.

US ISPs such as Comcast, Charter Communications and Altice NV have pledged in public statements to keep data flowing freely.

Despite this public pledge, Comcast, along with Verizon and AT&T, opposed the original 2015 rule change, saying it dented their enthusiasm for improving US broadband.

Facebook and Google’s parent company Alphabet, as well as many other net firms, have backed the open net rules, saying equal access is important for all.

Text-to-switch plan for mobile users

Mobile phone users will be able to switch operators by sending a text to the provider they want to leave, under plans drawn up by the regulator.

Ofcom said customers could avoid an awkward and long call to their operator and instead send a text. In turn, they would be sent switching codes.

The proposal means Ofcom’s previously preferred option – a simpler one-stage process – is being dropped.

That system was more expensive and could have raised bills, it said.

The change of preferred plan marks a victory for mobile operators who would have faced higher costs under the alternative system. Ofcom said its research suggested customers would also prefer the new planned system.

At present, anyone who wishes to switch to a different mobile provider must contact their current supplier to tell them they are leaving.

Ofcom research suggests that, of those who have switched, some 38% have been hit by one major problem during the process. One in five of them temporarily lost their service, while one in 10 had difficulties contacting their current supplier or keeping their phone number.

Under previous plans, Ofcom wanted responsibility for the switch to be placed entirely in the hands of the new provider. That would mean the customer making just one call, to the new provider.


The regulator has now concluded that such a system would be twice as expensive as its newly-preferred option of texting to switch.

Customers would text their current provider and receive a reply containing a unique code to pass on to their new provider, which could then arrange the switch within one working day. Customers would be able to follow this process whether or not they were taking their mobile number with them.
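The text-to-switch flow above can be sketched as a simple issue-and-redeem scheme. The code format and the validity window below are illustrative assumptions, not Ofcom’s specification:

```python
import secrets
from datetime import datetime, timedelta

_issued = {}  # code -> (phone_number, expiry); a real system would persist this

def issue_switching_code(phone_number: str) -> str:
    """Old provider: generate a unique, unguessable code for the customer."""
    code = secrets.token_hex(4).upper()  # e.g. an 8-character hex code (assumed format)
    # Valid for roughly one working day (illustrative approximation).
    _issued[code] = (phone_number, datetime.now() + timedelta(days=1))
    return code

def redeem_switching_code(code: str, phone_number: str) -> bool:
    """New provider: accept the code once only, within its validity window."""
    entry = _issued.pop(code, None)
    if entry is None:
        return False  # unknown or already-used code
    number, expiry = entry
    return number == phone_number and datetime.now() <= expiry
```

Making the code single-use and tied to the number means an intercepted code cannot be replayed to move someone else’s line.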

Under the proposed rules, mobile providers would be banned from charging for notice periods running after the switch date. That would mean customers would no longer have to pay for their old and new service at the same time after they have switched.

A final decision will be made in the autumn.

Latest figures published last year showed that there were an estimated 47 million mobile phone contracts in the UK, and approximately 5.9 million people had never switched provider at all, nor considered switching in the previous year.

General election 2017: Illegal content sanctions threat

Image caption: The Conservative manifesto proposes a so-called ‘Twitter tax’

Online companies could face fines or prosecution if they fail to remove illegal content, under Conservative plans for stricter internet regulation.

The party has also proposed an industry-wide levy, dubbed a “Twitter tax”, to fund “preventative activity to counter internet harms”.

Labour said it had “pressed for tough new codes” in the past but the government had “categorically refused”.

The Liberal Democrats said more needed to be done “to find a real solution”.

Voluntary contributions

The Conservatives said the levy, proposed in their election manifesto, would use the same model as that used in the gambling industry, where companies voluntarily contribute to the charity GambleAware to help pay for education, research, and treating gambling addiction.

Image caption: Labour says it wants to protect young people from online bullying

All social media and communications service providers would be given a set period to come up with plans to fund and promote efforts “to counter internet harms”.

If they failed to do so, the government would have the power to impose an industry-wide toll.

The Conservatives say the exact details, including how long the industry will be given to comply and the size of the levy, will be consulted upon.

A Labour spokesman said: “If the Tories are planning to levy a new tax on social media companies, they need to set out how it will work, who it will affect and what it will raise.

‘Sanctions regime’

“Labour has pushed for a code of practice about the responsibilities of social media companies to protect children and young people from abuse and bullying.”

The Conservatives have also pledged to introduce “a sanctions regime” that would give regulators “the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law”.

Social media platforms and internet service providers would have clearer responsibilities for reporting and removing harmful material – including bullying and inappropriate or illegal content – and would be required to take such material down.

“It is certainly bold of the Conservatives to boast that they can protect people on the internet,” Liberal Democrat home affairs spokesman Alistair Carmichael said.

“Government and technology companies must do more to find a real solution to problematic content online.”

And Labour’s digital economy spokeswoman Louise Haigh said: “The Home Office were crystal clear they did not want to legislate and that they believed the voluntary framework was sufficient.

“The fact is that in government the Tories have been too afraid to stand up to the social media giants and keep the public safe from illegal and extremist content.”

BBC fools HSBC voice recognition security system

Media caption: The bank’s voice-based ID system was fooled by Dan and his twin

Security software designed to prevent bank fraud has been fooled by a BBC reporter and his twin.

BBC Click reporter Dan Simmons set up an HSBC account and signed up to the bank’s voice ID authentication service.

HSBC says the system is secure because each person’s voice is “unique”.

But the bank let Dan Simmons’ non-identical twin, Joe, access the account via the telephone after he mimicked his brother’s voice.

The bank said it would “review” ways to make the ID system more sensitive following the BBC investigation.

‘Really alarming’

HSBC introduced the voice-based security in 2016, saying it measured 100 different characteristics of the human voice to verify a user’s identity.

Customers simply give their account details and date of birth and then say: “My voice is my password”.

Although the breach did not allow Joe Simmons to withdraw money, he was able to access balances and recent transactions, and was offered the chance to transfer money between accounts.

“What’s really alarming is that the bank allowed me seven attempts to mimic my brother’s voiceprint and get it wrong, before I got in at the eighth time of trying,” he said.

Image caption: HSBC advertises the system in its branches

“Can would-be attackers try as often as they like until they get it right?”

Separately, a Click researcher found HSBC Voice ID kept letting them try to access their account after they deliberately failed on 20 separate occasions spread over 12 minutes.

Click’s defeat of the system is believed to be the first time the voice security measure has been breached.

HSBC declined to comment on how secure the system had been until now.

A spokesman said: “The security and safety of our customers’ accounts is of the utmost importance to us.

“Voice ID is a very secure method of authenticating customers.

“Twins do have a similar voiceprint, but the introduction of this technology has seen a significant reduction in fraud, and has proven to be more secure than PINs, passwords and memorable phrases.”

Account open

“I’m shocked,” said Mike McLaughlin, a security expert at First Base Technologies.

“This should not be allowed to happen.

“Another person should not be able to access your bank account.

Image caption: Twins are used to check that voice-based ID systems can pick out individuals

“Voices are unique – but if the system allows for too many discrepancies in the voiceprint for a match, then it’s not secure.

“And that seems to be what’s happened here.”

Prof Vladimiro Sassone, an expert in cyber-security, from the University of Southampton, said biometrics could, in general, be an effective security layer, but there were dangers if companies put too much faith in something that was not 100% secure.

“In principle there should be no room for error at all,” said Prof Sassone.

“It should be good at the first attempt.”

“Voice identification is not like a password system.”

“You can’t forget your voice or get the wrong one.

“After two attempts, systems should be able to say whether it’s a match or not and alert the bank and user if further attempts are made.”
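The attempt-limiting behaviour Prof Sassone describes can be sketched as a per-account failure counter. The two-attempt threshold follows his suggestion; the rest is an illustrative assumption, not any bank’s actual policy:

```python
MAX_ATTEMPTS = 2          # lock and alert after this many consecutive failures
_failures = {}            # account id -> consecutive failed voiceprint matches

def record_match_result(account: str, matched: bool) -> str:
    """Track voiceprint match results; alert once the failure limit is reached."""
    if matched:
        _failures[account] = 0  # a successful match resets the counter
        return "access granted"
    _failures[account] = _failures.get(account, 0) + 1
    if _failures[account] >= MAX_ATTEMPTS:
        # In a real system this would trigger notifications to bank and customer.
        return "locked: alert bank and customer"
    return "retry allowed"
```

Under this scheme the eight attempts described earlier would have triggered a lockout and an alert after the second failure.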

Prof Sassone said using unique biometric traits as a verifier should make it harder for hackers – but if they were copied by criminals, users could not then change their voice, face, or fingerprint as they would a password.

“If you have to prove it wasn’t you who accessed your account – that it was either a mimic or computer software – then how are you going to do that?” he asked.

“Especially if the bank is claiming the system is perfect.”

Image caption: HSBC said it used 100 different identifiers to fingerprint a customer’s voice

Security expert Prof Alan Woodward, from the University of Surrey, said it was dangerous to rely on one biological characteristic to authenticate someone, even if it was one unique to that person.

“Biometric based security has a history of measurements being copied,” he said.

“We’ve seen fingerprints being copied with everything from gummy bears to photographs of people’s hands.

“Hence, biometrics, just like other aspects of security, will always have to evolve as measures emerge to threaten them.

“Security is a story of measure and counter-measure.”

He said HSBC probably needed to reassess its technology and ideally add another “factor” alongside the voiceprint check to authenticate identity.

“As well as requiring something you are, it would require something you know or something you have, like a PIN,” he said.

“That makes it much more difficult to compromise.”
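Combining the two factors Prof Woodward describes reduces to requiring both checks to pass. In this sketch the voiceprint is assumed to arrive as a match score, and the score threshold and PIN-hashing scheme are illustrative simplifications:

```python
import hashlib
import hmac

# Stored server-side; real systems would use a salted, slow password hash.
PIN_HASH = hashlib.sha256(b"1234").hexdigest()

def authenticate(voice_score: float, pin: str, threshold: float = 0.9) -> bool:
    """Require something you are (voiceprint score) AND something you know (PIN)."""
    voice_ok = voice_score >= threshold  # threshold value is an assumption
    pin_ok = hmac.compare_digest(        # constant-time comparison
        hashlib.sha256(pin.encode()).hexdigest(), PIN_HASH)
    return voice_ok and pin_ok
```

With two factors, a mimicked voice alone fails, and a stolen PIN alone fails too.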

Image caption: Fingerprints have been copied using moulds made from gummy bear sweets

It is not just the ability of humans to fool computers that is worrying some high-tech companies.

Start-up Lyrebird is working on ways to replicate a voice using just a few minutes of recorded speech.

Co-founder Jose Sotelo said there was no doubt this had “implications” for voice identification systems.

“We are working with security researchers to figure out the best way to proceed,” he told Click.

“This is one of the reasons we have not published this to the public yet.

“It’s a scary application but we believe that we should be careful and should not be scared of technology and we should try to make the best out of it,” he said.

“One idea we are considering is to watermark the audio samples we produce so we are able to detect immediately if it is us that generated this sample.”
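A crude version of that idea, assuming 16-bit PCM audio and a least-significant-bit scheme (an illustration only, not Lyrebird’s actual method), would overwrite the low bit of each sample with one bit of an identifying tag:

```python
def embed_watermark(samples, tag_bits):
    """Overwrite the least significant bit of each sample with a tag bit."""
    out = list(samples)
    for i, bit in enumerate(tag_bits):
        out[i] = (out[i] & ~1) | bit  # clear low bit, then set it to the tag bit
    return out

def read_watermark(samples, n_bits):
    """Recover the first n_bits of the tag from the samples' low bits."""
    return [s & 1 for s in samples[:n_bits]]
```

Changing the lowest bit of a 16-bit sample is inaudible, but such a simple mark is also trivially stripped; production watermarks spread the tag redundantly so it survives re-encoding.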

You can see the full BBC Click investigation into biometric security in a special edition of the show, on BBC News and on the iPlayer from Saturday 20 May.

Cage director charged for failing to disclose passwords

The director of campaign group Cage has been charged under the Terrorism Act.

Muhammad Rabbani faces a charge of failing to disclose his password after being detained at Heathrow Airport under counter-terrorism stop-and-search powers, the organisation has said.

Mr Rabbani was stopped at Heathrow in November, but refused to give officers access to his phone and laptop.

Cage describes itself as an independent advocacy group “working for those impacted by the War on Terror”.

The Metropolitan Police confirmed that Mr Rabbani, 36, attended an east London police station on Wednesday.

A spokesman for Cage said Mr Rabbani was charged with wilfully obstructing or seeking to frustrate a search or examination under Schedule 7 of the Terrorism Act 2000, over the incident at the airport in November.

That law gives officers special powers to question and detain for up to six hours any individual passing through a UK port, airport, international rail terminal or border area.

Cage, whose main role is to support those who have been affected by UK counter-terrorism legislation, said Mr Rabbani had been released on bail and would be challenging the charge.

Election candidates warned about phishing attempts

Image caption: Advice has been sent to general election candidates – including recent MPs

Candidates in the general election have been asked to look through their emails for signs that they have been targeted by a phishing attack.

The list of potential targets includes recent MPs.

The National Cyber Security Centre (NCSC), which is part of GCHQ, disclosed the request in a document released early on 16 May.

The BBC understands that the number of victims is currently in single figures.

Candidates have been asked to look for suspicious emails received after January 2017.

The NCSC declined to say if any data had been taken.

A report in the Financial Times said it was “likely” that the phishing campaign had been orchestrated by a state.

In a document titled Phishing: guidance for political parties and their staff, the centre says it has “become aware of phishing attacks to gain access to the online accounts of individuals that were MPs before dissolution of Parliament” and “other staff who work in political parties”.

Media caption: Technology explained: What is phishing?

The NCSC said the attacks were likely to continue “and may be sent to parliamentary email addresses, prospective parliamentary candidates, and party staff”.

‘Personal emails targeted’

The BBC understands that so far victims’ personal emails have been affected but no successful phishing attempts have been made via parliamentary email addresses.

It is believed that the NCSC has contacted the Electoral Commission about the threat and that the commission will help to alert candidates.

The centre said that potential victims should look out for “unexpected requests to reset your password for online or social media accounts (such as Apple, Google, Microsoft, Facebook or Twitter)”.

“Or you might have been asked to approve changes to your account that you’ve not requested.”
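A minimal automated check for the pattern the NCSC describes (the list of legitimate domains below is an invented example, not official guidance) is to flag password-reset emails whose sender domain is not one the account provider actually uses:

```python
# Hypothetical allow-list of domains genuine reset emails might come from.
LEGIT_RESET_DOMAINS = {"apple.com", "google.com", "microsoft.com",
                       "facebook.com", "twitter.com"}

def suspicious_reset_sender(from_address: str) -> bool:
    """Flag password-reset mail whose sender domain is not on the known list."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    # Subdomains of a legitimate domain (e.g. accounts.google.com) are allowed.
    return not any(domain == d or domain.endswith("." + d)
                   for d in LEGIT_RESET_DOMAINS)
```

Real phishing defences also verify SPF/DKIM results and inspect the links inside the message, since the From header alone is easily forged.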

The NCSC did not say whether it knew who was behind the phishing campaign.

Image caption: Concern about phishing attempts has been on the rise lately

Analysis by Gordon Corera, security correspondent, BBC News

The warnings to political parties come as cyber-security officials brace themselves for some kind of incident during the elections.

No-one can be sure that anything will take place, but the experience of the US and more recently France has led them to believe that some kind of theft and then dump of information is possible.

In both those cases, a Russian hand is suspected.

Intelligence agencies have historically kept their distance from the communication of politicians due to the doctrine that says MPs should not be monitored.

But parties and politicians themselves have been asking for advice and guidance in recent months amid growing concerns.

Concern about elections being targeted by hackers has been running high, following the attack on the Democratic National Committee during the US presidential election.

US authorities attributed that incident to Russia and said that a significant component of the attack involved phishing.

More recently, the electoral campaign of President Emmanuel Macron in France was targeted by a similar campaign.

The NCSC has said the UK has “systems in place to defend against electoral fraud at all levels and [we] have seen no successful cyber-intervention in UK democratic processes”.

The BBC understands that since last month, the NCSC has delivered cyber-security seminars to the UK’s political parties, with the aim of helping them reduce the risk of succumbing to an attack.

Advice has also been offered to local authorities and the Electoral Commission.
