New updates that are being rolled out on the social media platform Twitter in order to make the network a safer place have not swayed investors.

Twitter shares dropped by more than 10 percent today after the social media company reported quarterly revenue that missed Wall Street’s expectations.

It also issued guidance that fell far short of estimates.

Twitter posted fourth-quarter earnings of 16 cents per share on revenue of $717m and issued first-quarter guidance of between $75m and $95m. Wall Street had expected $191.3m.

One of the reasons Twitter is struggling is the abusive trolls that plague the site. But by introducing better ways for users to report abusive tweets and by stopping the creation of new abusive accounts, the company hopes it can attract more advertising revenue.

Writing in a blog post, Twitter’s vice president of engineering, Ed Ho, said:


“We stand for freedom of expression and people being able to see all sides of any topic. That’s put in jeopardy when abuse and harassment stifle and silence those voices. We won’t tolerate it and we’re launching new efforts to stop it.”

The new measures announced this week will prevent people who have been suspended from the platform for abuse from creating new accounts. A new safe search feature is also being trialled to hide tweets that contain potentially sensitive content. These features will be rolled out across the platform over the next few weeks.

Platforms including Twitter and Facebook have been criticised in the past for allowing trolls and abusive accounts to flourish by not acting fast enough to prevent it.

Researchers from Cardiff University, at the Social Data Science Lab (SDSL), are setting up a new Centre for Cyberhate Research and Policy to help the UK government monitor hate crime on social media, particularly revolving around Brexit.

Recent figures from the Home Office showed that hate crimes rose by 41 percent after the Brexit vote.

Data from 31 police forces demonstrated that over 2,200 racially or religiously aggravated offences took place after the referendum.

As a result, the SDSL has been awarded £250,000 in grants from the Economic and Social Research Council for the project. It will focus on the development of a monitoring tool that can display a live feed of the propagation of hate speech on Twitter.

The tool will be made available to the UK government to identify areas that require policy attention and to improve interventions to stop hate crime from spreading.

Co-director of the SDSL at Cardiff University, Professor Matthew Williams, said:

“Hate crimes have been shown to cluster in time and tend to increase, sometimes significantly, in the aftermath of “trigger” events. The referendum on the UK’s future in the European Union has galvanized certain prejudiced opinions held by a minority of people, resulting in a spate of hate crimes. Many of these crimes are taking place on social media.”

The team has begun collecting data over a 12-month period, starting on 23 June 2016, the day the UK voted to leave the EU, to demonstrate how a “trigger” event can lead to the spread of hate and xenophobia online. Machine learning technologies will be applied to classify, analyse and evaluate social media posts in real-time.
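In broad terms, this kind of real-time classification works by training a text classifier on labelled examples and then scoring each incoming post as it arrives. The following is a minimal sketch of that idea, not the SDSL's actual system: it assumes a scikit-learn TF-IDF plus logistic regression pipeline and a small hypothetical labelled corpus.

```python
# Minimal sketch of hate-speech scoring (illustrative only, not the
# SDSL's real models): TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = hateful/antagonistic, 0 = benign.
texts = [
    "go back to your own country",       # 1
    "you people don't belong here",      # 1
    "great turnout at the polls today",  # 0
    "lovely weather for the referendum", # 0
]
labels = [1, 1, 0, 0]

# Fit a simple bag-of-words classifier on the labelled corpus.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def score_post(post: str) -> float:
    """Return the model's estimated probability that a post is hateful."""
    return model.predict_proba([post])[0][1]

# In a live monitor, each incoming tweet would be scored as it arrives,
# and high-scoring posts fed into the dashboard or flagged for review.
print(score_post("you don't belong here"))
```

A production system would need a far larger labelled corpus, careful handling of sarcasm and dialect, and a streaming ingestion layer in front of the classifier, but the classify-as-it-arrives loop is the core idea.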

Co-director at SDSL, Dr Pete Burnap, said:

“To date the information available to government on topics such as hate speech around Brexit has been post-hoc and descriptive. What is needed are open and transparent methods that are replicable, interpretable and applicable in real-time as events are unfolding.

“We will be enhancing our existing language models using cutting edge computational methods to mine massive amounts of public reaction and provide meaningful insights into hateful and antagonistic commentary within minutes of an event occurring.”

The SDSL has worked on several studies on the spread of hate speech on social media, and has partnered with the Metropolitan Police Service and the Ministry of Justice in the past.