Accountability and Ethics in Data Science: Professional Ethics in Contemporary Data Science Practice

Executive Summary

This paper discusses accountability, ethics and professionalism in data science (DS) practice, considering the demands and challenges practitioners face. Dramatic increases in the volume of data captured from people and things, and in the ability to process it, place Data Scientists in high demand. Business executives hold high hopes for the new and exciting opportunities DS can bring to their businesses, and hype and mysticism abound. Meanwhile, the public are increasingly wary of trusting businesses with their personal data, and governments are implementing new regulation to protect public interests. We ask whether some form of professional ethics can protect data scientists from unrealistic employer expectations and far-reaching public accountabilities.

Demand for Data Science

Demand for DS skills is off the charts, as Data Scientists have the potential to unlock the promise of Big Data and Artificial Intelligence.

With much of our lives conducted online, and everyday objects connected to the internet, the “era of Big Data has begun” (boyd & Crawford 2012). Advances in computing power and cheap cloud services mean that vast amounts of digital data are tracked, stored and shared for analysis (boyd & Crawford 2012), and there is a process of “datafication” as this analysis feeds back into people’s lives (Beer 2017).

Concurrently, Artificial Intelligence (AI) is gaining traction through the successful use of statistical machine learning and deep-learning neural networks for image recognition, natural language processing, game playing and question-and-answer dialogue (Elish & boyd 2017). AI now permeates every aspect of our lives in chatbots, robotics, search and recommendation services, automated voice assistants and self-driving cars.

Data is the new oil, and Google, Amazon, Facebook and Apple (GAFA) control vast amounts of it. Combined with their network power, this results in supernormal profits: US$25bn net profit among them in the first quarter of 2017 alone (The Economist 2017). Tesla, which made 20,000 self-driving cars in this period, is worth more than GM, which sold 2.5m (The Economist 2017).

Furthermore, traditional sectors such as government, education, healthcare, financial services, insurance and retail, and functions such as accounting, marketing, commercial analysis and research, which have long used statistical modelling and analysis in decision making, are harnessing the power of Big Data and AI to supplement or replace “complex decision support in professional settings” (Elish & boyd 2017).

All these factors drive incredible demand from organisations and result in a shortage of Data Scientists.

Demand for Accountability

With this incredible appetite for and supply of personal data, individuals, governments and regulators are increasingly concerned about threats to competition (globally), personal privacy and discrimination, as DS, algorithms and big data are neither objective nor neutral (Beer 2017; Goodman & Flaxman 2016). These must be understood as socio-technical concepts (Elish & boyd 2017), and their limitations and shortcomings well understood and mitigated.

To begin with, the process of summarizing humans into zeros and ones removes context; therefore, contrary to popular mythology about Big Data, the larger the data set, the harder it is to know what you are measuring (Theresa Anderson n.d.; Elish & boyd 2017). Rather, the DS practitioner has to decide what is observed, recorded and included in the model, how the results are interpreted, and how to describe the model’s limitations (Elish & boyd 2017; Theresa Anderson n.d.).

All too often, limitations in the data mean that “cultural biases and unsound logics get reinforced and scaled by systems in which spectacle is prioritised over careful consideration” (Elish & boyd 2017).

In addition, profiling is inherently discriminatory, as algorithms sort, order, prioritise and allocate resources in ways that can “create, maintain or cement norms and notions of abnormality” (Beer 2017; Goodman & Flaxman 2016). Statistical machine learning scales normative logic (Elish & boyd 2017), and biased data in means biased data out, even if protected attributes are excluded but correlated ones are included. Systems are not optimised to be unbiased; rather, the objective is to have better average accuracy than the benchmark (Merity 2016).
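The effect of excluding a protected attribute while keeping a correlated proxy can be seen in a small simulation. The Python sketch below is purely illustrative: the groups, postcodes, rates and the “learned” rule are all invented for this example, not drawn from any cited study.

```python
import random

random.seed(42)

# Hypothetical set-up: 'group' is a protected attribute the model never sees,
# but 'postcode' is a close proxy for it (a redundant encoding).
def make_person():
    group = "A" if random.random() < 0.5 else "B"
    # 90% of group A live in postcode 1000; 90% of group B in postcode 9000.
    if group == "A":
        postcode = "1000" if random.random() < 0.9 else "9000"
    else:
        postcode = "9000" if random.random() < 0.9 else "1000"
    return group, postcode

def approve(postcode):
    # A rule learned from historically biased approvals: it only ever saw
    # the postcode, yet it reproduces the old pattern of favouring one area.
    return postcode == "1000"

people = [make_person() for _ in range(10_000)]
approval_rates = {}
for g in ("A", "B"):
    members = [pc for grp, pc in people if grp == g]
    approval_rates[g] = sum(approve(pc) for pc in members) / len(members)
    print(f"Group {g} approval rate: {approval_rates[g]:.0%}")
```

Even though the protected attribute was excluded, the approval rates for the two groups diverge sharply, because the proxy carries the bias through.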

Lastly, algorithms by their statistical nature are risk averse, and focus where they have a greater degree of confidence (Elish & boyd 2017; Theresa Anderson n.d.; Goodman & Flaxman 2016), exacerbating the underrepresentation of minorities in unbalanced training data (Merity 2016).

In response, the European Union announced an overhaul of its data protection regime, from a Directive to the far-reaching General Data Protection Regulation. Slated to take effect in May 2018, this regulation protects the rights of individuals, including citizens’ rights to be forgotten and to have their data stored securely, but also the right to an explanation of algorithmic decisions that significantly affect an individual (Goodman & Flaxman 2016). The regulation prohibits decisions made entirely by automated profiling and processing, and will impose significant fines for non-compliance.

Ethical Challenges and Opportunities for DS Practitioners

DS practitioners must overcome many challenges to meet these demands for accountability and profit. It all boils down to ethics: data scientists must identify and weigh up the potential consequences of their actions for all stakeholders, and evaluate their possible courses of action against their view of ethics or right conduct (Floridi & Taddeo 2016).

Algorithms are machine learning, not magic (Merity 2016), but the media and senior executives seem to have blind faith, and regularly use “magic” and “AI” in the same sentence (Elish & boyd 2017).

In order to earn the trust of businesses and act ethically towards the public, practitioners must close the expectation gap generated by recent successful (but highly controlled) “experiments-as-performances” by being very clear about the limitations of their DS practices. Otherwise DS will be seen as snake oil and collapse under the weight of the hype and these unmet expectations (Elish & boyd 2017), or breach regulatory requirements and lose public trust trying to meet them.

The accountability challenge is compounded in multi-agent, distributed global data supply chains, where accountability and control are hard to assign and assert (Leonelli 2016): the data may not be “cooked with care”, and the provenance and assumptions within it are unknown (Elish & boyd 2017; Theresa Anderson n.d.).

Furthermore, cutting-edge DS is not a science in the traditional sense (Elish & boyd 2017), where hypotheses are stated and tested using the scientific method. Often it really is a black box (Winner 1993), where the workings of the machine are unknown, and hacks and shortcuts are made to improve performance without really knowing why they work (Sutskever, Vinyals & Le 2014).

This makes the challenge of making the algorithmic process and results explainable to a human almost impossible in some networks (Beer 2017).

Lastly, the social and technical infrastructure grows quickly around algorithms once they are out in the wild. With algorithms powering self-driving cars and air traffic collision avoidance systems, ignoring the socio-technical context can have catastrophic results. The Überlingen crash in 2002 occurred partly because there was limited training on what controllers should do when they disagreed with the algorithm (Ally Batley 2017; Wikipedia n.d.). Data scientists have limited time and influence to get the socio-technical setting optimised before order and inertia set in, but the good news is that the time is now, whilst the technology is new (Winner 1980).

Indeed, the opportunities to use DS and AI for the betterment of society are vast. If data scientists embrace the uncertainty and the humanity in the data, they can make space for human creative intelligence, whilst at the same time respecting those who contributed the data, and hopefully create some real magic (Theresa Anderson n.d.).



Professions and Ethics

So how can DS practitioners equip themselves to take on these challenges and opportunities ethically?

Historically, many other professions have formed professional bodies to provide support outside the influence of the professional’s employer. Members sign codes of ethics and professional conduct, in vocations as diverse as design, medicine and accounting (The Academy of Design Professionals 2012; Australian Medical Association 2006; CAANZ n.d.).

Should DS practitioners follow this trend?

What is a profession?

“A profession is a disciplined group of individuals who adhere to ethical standards and who hold themselves out as, and are accepted by the public as possessing special knowledge and skills in a widely recognised body of learning derived from research, education and training at a high level, and who are prepared to apply this knowledge and exercise these skills in the interest of others. It is inherent in the definition of a profession that a code of ethics governs the activities of each profession.” (Professions Australia n.d.)

The central component in every definition of a profession is ethics and altruism (Professions Australia n.d.), therefore it is worth exploring professional membership further as a tool for data science practitioners.

Current state of DS compared to accounting profession

Let us compare where the nascent DS practice is today with the chartered accountant (CA) profession. The first CA membership body was formed in 1854 in Scotland (Wikipedia 2017a), long after double-entry accounting was invented in the 13th century (Wikipedia 2017b). Modern data science began in the mid-twentieth century (Foote 2016), and there is as yet no professional membership body.

The current CA membership growth rate is unknown, but DS practitioner growth is impressive. In 2016, there were 2.1M licensed chartered accountants[1] (Codd 2017), while IBM predicts there will be 2.7M data scientists by 2020, growing 15% annually (Columbus n.d.; IBM Analytics 2017).

The standard of education is very high in both professions, but for different reasons. Chartered Accountants have strenuous post graduate exams to apply for membership, and requirements for continuing professional education (CAANZ n.d.).

DS entry levels are high too, but enforced by competitive forces only. Right now, 39% of DS job openings require a Masters or Ph.D (IBM Analytics 2017), but this may change over time as more and more data scientists are educated outside of universities.

The CA code of ethics is very stringent, requiring high standards of ethical behaviour and outlining rules, and membership can be revoked if the rules are broken (CAANZ n.d.). CAs must treat each other respectfully, and act ethically and in accordance with the code towards their clients and the public.

Lastly, like accounting, DS is all about numbers, and seems like a quantitative and objective science. Yet there is compelling research to indicate both are more like social sciences, and benefit from being reflexive in their research practices (boyd & Crawford 2012; Elish & boyd 2017; Chua 1986, 1988; Gaffikin 2011).   Also like accountants (Gallhofer, Haslam & Yonekura 2013), DS practitioners could suffer criticism for being long on practice and short on theory.

Therefore, DS should look hard at the experience of accountants and determine if, and when becoming a profession might work for them.

For and Against DS becoming a profession

It is conceivable that individually, DS practitioners could be ethical in their conduct, without the large cost in time and money of professional membership.

Data scientists are very open about their techniques, code and results accuracy, and welcome suggestions and feedback. They use open source software packages, share their code on sites like GitHub and BitBucket, contribute answers on Stack Overflow, blog about their learnings, and present at and attend meetups. It’s all very collegiate, and competitive forces drive continuous improvement.

But despite all this online activity, it is not clear whether they behave ethically. They do not readily share data, as it is often proprietary and confidential, nor do they share substantive results and interpretations. This makes it difficult to peer review or reproduce their results, or to be transparent enough about their DS practices to ascertain whether they are ethical.

A professional body may seem like a lot of obligations and rules, but by proclaiming their ethical stance, it could offer data scientists some protection and more access to data.

From the public’s point of view, a profession is meant to be an indicator of trust and expertise (Professional Standards Councils n.d.). Unlike with other professions, the public would rarely employ the services of a data scientist directly, but they do give consent for data scientists to collect their data (“oil”).

Becoming a profession could earn public trust and personal data (Accenture n.d.). It can also help pool resources, allowing practitioners to pursue initiatives that are altruistic and socially preferable (Floridi & Taddeo 2016). Ethical conduct also makes for good leaders who can navigate conflict and ambiguity (Accenture n.d.), and leads to good financial results (Kiel 2015).

With the growing regulatory focus on data and data security, it is foreseeable that Chief Data Officers and Chief Information Security Officers may soon be subject to individual fines and jail-time penalties, as Chief Executive and Chief Financial Officers are with regard to Sarbanes-Oxley Act compliance (Wikipedia 2017c). Professional membership can provide the training and support needed to keep practitioners up to date, in compliance and out of jail.

Lastly, right now the demand for DS skills far outweighs supply. Therefore, despite the significant concentration of DS employers (in GAFA), the bargaining power of some individual data scientists is relatively high. However, they have no real influence over how their work is used: their only option in a disagreement is to resign. Over the medium term, supply will catch up with demand, and then even the threat of resignation will become worthless.

In summary

Steering the course of DS practice towards ethical outcomes is easiest at the outset (Winner 1980); however, it is highly unlikely that DS practitioners will stand up to their employers and voluntarily band together to create a professional membership body in the immediate future.

Professional ethics can protect data scientists from unrealistic employer expectations and far-reaching public accountabilities, but the organisational effort may come too late.

Regulatory pressure that counters the power of GAFA may create the force for change, but more likely professional indemnity insurers and legal liability cases will eventually force sole traders and small to medium businesses to band together as a professional body to shoulder the responsibility of public accountability and earn the right to their data.








Accenture n.d., ‘Data Ethics Point of View’, viewed 12 November 2017.

Ally Batley 2017, Air Crash Investigation – DHL Mid Air COLLISION – Crash in Überlingen, viewed 20 November 2017.

Australian Medical Association 2006, ‘AMA Code of Ethics – 2004. Editorially Revised 2006’, Australian Medical Association, viewed 20 November 2017.

Beer, D. 2017, ‘The social power of algorithms’, Information, Communication & Society, vol. 20, no. 1, pp. 1–13.

boyd, danah & Crawford, K. 2012, ‘Critical Questions for Big Data’, Information, Communication & Society, vol. 15, no. 5, pp. 662–79.

CAANZ n.d., ‘Codes and Standards | Member Obligations’, CAANZ, viewed 20 November 2017.

Chua, W.F. 1986, ‘Radical Developments in Accounting Thought’, The Accounting Review, vol. LXI, no. 4, pp. 601–33.

Chua, W.F. 1988, ‘Interpretive Sociology and Management Accounting Research – a critical review’, Accounting, Auditing and Accountability Journal, vol. 1, no. 2, pp. 59–79.

Codd, A. 2017, ‘How many Chartered accountants are in the world?’, viewed 20 November 2017.

Columbus, L. n.d., ‘IBM Predicts Demand For Data Scientists Will Soar 28% By 2020’, Forbes, viewed 20 November 2017.

Data Science Association n.d., ‘Data Science Association Code of Conduct’, Data Science Association, viewed 13 November 2017, </code-of-conduct.html>.

Elish, M.C. & boyd, danah 2017, Situating Methods in the Magic of Big Data and Artificial Intelligence, SSRN Scholarly Paper, Social Science Research Network, Rochester, NY, viewed 19 November 2017.

Floridi, L. & Taddeo, M. 2016, ‘What is data ethics?’, Phil. Trans. R. Soc. A, vol. 374, no. 2083, p. 20160360.

Foote, K. 2016, ‘A Brief History of Data Science’, DATAVERSITY, viewed 21 November 2017.

Gaffikin, M. 2011, ‘What is (Accounting) history?’, Accounting History, vol. 16, no. 3, pp. 235–51.

Gallhofer, S., Haslam, J. & Yonekura, A. 2013, ‘Further critical reflections on a contribution to the methodological issues debate in accounting’, Critical Perspectives on Accounting, vol. 24, no. 3, pp. 191–206.

Goodman, B. & Flaxman, S. 2016, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’, arXiv:1606.08813 [cs, stat], viewed 13 November 2017.

IBM Analytics 2017, ‘The Quant Crunch’, IBM, viewed 20 November 2017.

Kiel, F. 2015, ‘Measuring the Return on Character’, Harvard Business Review, viewed 13 November 2017.

Leonelli, S. 2016, ‘Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems’, Phil. Trans. R. Soc. A, vol. 374, no. 2083, p. 20160122.

Merity, S. 2016, ‘It’s ML, not magic: machine learning can be prejudiced’, viewed 19 November 2017.

Professional Standards Councils n.d., What is a profession?, viewed 19 November 2017.

Professions Australia n.d., What is a profession?, viewed 21 November 2017.

Sutskever, I., Vinyals, O. & Le, Q.V. 2014, ‘Sequence to Sequence Learning with Neural Networks’, arXiv:1409.3215 [cs], viewed 4 November 2017.

The Academy of Design Professionals 2012, ‘The Academy of Design Professionals – Code of Professional Conduct’, viewed 13 November 2017.

The Economist 2017, ‘The world’s most valuable resource is no longer oil, but data’, The Economist, 6 May, viewed 19 November 2017.

Theresa Anderson n.d., Managing the Unimaginable, viewed 19 November 2017.

Wikipedia 2017a, ‘Chartered accountant’, Wikipedia, viewed 21 November 2017.

Wikipedia 2017b, ‘History of accounting’, Wikipedia, viewed 21 November 2017.

Wikipedia 2017c, ‘Sarbanes–Oxley Act’, Wikipedia, viewed 21 November 2017.

Wikipedia n.d., ‘Überlingen mid-air collision’, Wikipedia, viewed 20 November 2017.

Winner, L. 1980, ‘Do Artifacts Have Politics?’, Daedalus, vol. 109, no. 1, pp. 121–36.

Winner, L. 1993, ‘Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology’, Science, Technology, & Human Values, vol. 18, no. 3, pp. 362–78.

[1] not including unlicensed practitioners such as bookkeepers, or Certified Practicing Accountants

Anthropomorphising the algorithm

Leading on from my last blog post’s conclusion that holding algorithms accountable is a bit of a daft idea, I want to thank Richard Nota for this wonderful comment on The Conversation article posted by Andrew Waites in our Slack channel:

Richard Nota

The ethics is about the people that oversee the design and programming of the algorithms.

Machine learning algorithms work blindly towards the mathematical objective set by their designers. It is vital that this task include the need to behave ethically.

A good start would be for people to stop anthropomorphising robots and artificial intelligence.

Anthropomorphising… I had to google to see if that is even a word (it is).
But that is exactly what I believe needs to happen: stop anthropomorphising algorithms.
As Theresa puts it, they are part of the infrastructure, and once let loose into the wild, they can become extremely inflexible if they are not created with care and managed appropriately.
It’s up to the humans to manage the ethical implications of the algorithms in their systems.
Anyway, the article was written by Lachlan McCalman who works at Data61 and he makes some very good arguments.
He points out that making the smallest mistake possible does not mean NO mistakes.
Lachlan describes four errors and how the algorithm can be designed to adjust for them.
1. Different people, different mistakes
There can actually be quite large mistakes for different subgroups that offset each other, particularly for minorities: because there are few examples, getting their predictions wrong doesn’t penalise the results too much.
I know about this already thanks to my favourite, Jeff Larson at ProPublica, and the offsetting errors in the recidivism prediction algorithm: false negatives and positives for white and black males. I’m sure you can work out who was the false negative (incorrectly predicted not to reoffend) vs false positive (incorrectly predicted to re-offend).
Lachlan suggests to fix this, the algorithm would need to be changed to care equally about accuracy for the sub groups.
2. The algorithm isn’t sure
Of course, it’s just a guess, and there are varying degrees of uncertainty.
Lachlan suggests the algorithm could allow for giving the benefit of the doubt where there is uncertainty.
3. Historical bias
This one is huge. Of course patterns of bias become entrenched if the algorithm is fed biased history.
So changing the algorithm (positive discrimination, perhaps) to counter this bias would be required.
4. Conflicting priorities
Trade-offs need to be made when there are limited resources.
Judgement is required, with no simple answer here.
In conclusion, Lachlan proposes there needs to be an “ethics engineer” who explicitly obtains ethical requirements from stakeholders, converts them into a mathematical objective, and then monitors the algorithm’s ability to meet that objective in production.
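The monitoring part of that role can be sketched in code. This Python mock-up is my own illustration (the group names and counts are invented): it shows how overall accuracy can look fine while one subgroup’s error rates are far worse, which is exactly the "different people, different mistakes" problem above.

```python
from collections import defaultdict

def subgroup_report(records):
    """records: list of (group, predicted, actual) tuples, one per person."""
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "n": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        if predicted and not actual:
            s["fp"] += 1  # false positive
        if actual and not predicted:
            s["fn"] += 1  # false negative
    return {g: {"false_pos_rate": s["fp"] / s["n"],
                "false_neg_rate": s["fn"] / s["n"]}
            for g, s in stats.items()}

# Made-up data: the majority group is predicted well; the minority group's
# errors are large but offsetting, so overall accuracy still looks good.
records = (
    [("majority", True, True)] * 880 + [("majority", False, False)] * 80 +
    [("majority", True, False)] * 20 + [("majority", False, True)] * 20 +
    [("minority", True, True)] * 25 + [("minority", False, False)] * 25 +
    [("minority", True, False)] * 25 + [("minority", False, True)] * 25
)
for group, rates in subgroup_report(records).items():
    print(group, rates)
```

Overall accuracy here is about 92%, yet the minority group suffers 25% false positive and 25% false negative rates versus 2% for the majority, so a monitor that only checks the headline number would never raise an alarm.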

About algorithms being black boxes

For 36111 Philosophies of Data Science Practices’ first assignment, I am exploring the emerging practice of holding algorithms accountable.

Often, people refer to algorithms as black boxes.

There are three different definitions of a black box, according to Merriam-Webster:

Definition of black box

1: a usually complicated electronic device whose internal mechanism is usually hidden from or mysterious to the user; broadly: anything that has mysterious or unknown internal functions or mechanisms
2: a crashworthy device in aircraft for recording cockpit conversations and flight data
3: a device in an automobile that records information (such as speed, temperature, or gasoline efficiency) which can be used to monitor vehicle performance or determine a cause in the event of an accident


Usually, when people refer to algorithms, they classify them as a type 1 black box. So what does that imply about how we interact with these black boxes? It’s something mysterious that mungifies inputs and turns them into instructions you blindly follow?

If you treat algorithms like this, you may end up opening up a type 2 black box.

Let me explain what I mean with an example, courtesy of the Air Crash Investigation TV series (see the episode, perhaps illegally uploaded to YouTube).

In 2002, two planes collided mid-air over Überlingen in Germany, tragically killing everyone on board, mostly children. Afterwards, the devastated air traffic controller was murdered in his front garden by a grief-maddened father who had lost his entire family in the crash (Wikipedia). Absolutely awful.

One of the contributing factors to this disaster was confusion in the human/computer interaction in the use of the Traffic Alert and Collision Avoidance System (TCAS) (see Kuchar and Drumm for how it works). TCAS is basically a system of sensors and algorithms that alerts and advises pilots on what action to take to avoid collisions. In this incident, there was conflict between the instructions of TCAS and the air traffic controller: one pilot followed TCAS, the other air traffic control, so both descended, ultimately ending in tragedy.

The TCAS software itself did not fail, but as there was no international code on what to do in these circumstances, the overall system failed. The supporting infrastructure was not there, and the human-computer interaction was not adequately considered or trained for. A previous incident in Japan (Wikipedia) had been reported to the International Civil Aviation Organization, but no action was taken. (Had that earlier incident ended in a crash, 677 people would have died, the largest toll ever.)

So my work is going to consider not just countering machine bias in the algorithm itself, but also considering the context in which it is used, and whether this is appropriate.

At the end of the day, holding an algorithm accountable is actually a ludicrous concept. It can only be the humans who are accountable.

On countering machine bias

ProPublica have a whole section dedicated to this topic. So glad to see this, and it appears they have covered insurance companies charging higher premiums in minority neighbourhoods, which I always suspected was happening. Can’t wait to read that!

This is a topic for another blog post!

Data Science Ethics: my initial thoughts

I had two main thoughts about this: self regulation by the data science profession, and data literacy.

The promise of big data and artificial intelligence is at an all-time high, but by no means at its peak. The availability of data to mine is growing exponentially. And yet the data science community is still relatively small (compared with, say, accountants or bankers) and focused on scientific techniques.

Data science is making immense changes to the way people live, that will impact generations to come.

Reading these articles made me wonder, are data scientists proactively managing the ethical ramifications of the data they create, the algorithms they build, and the decisions made on the basis of their work?

This is a pivotal time in the evolution of data science ethics.

Data Scientists must establish strong ethical foundations in their profession, to ensure data science is used to make the world a better place, and to avoid over-regulation by government if they don’t do their part voluntarily.

As I explain in a past blog post, even Facebook is recognising that it is not just a technology tool, but makes a real impact on the world.

Is now a good time for the profession to become a self regulating membership body?

Will auditors soon start to audit machine learning algorithms? (They should!)

I came across the Data Science Association’s code of conduct (Data Science Association n.d.).

Data literacy is also an interesting counterpoint to all of this.

I don’t think it will be long before the general populace revolts against organisations that are careless with their data, and against opaque algorithms determining their fate in a way no one can explain. People don’t have blind faith anymore.

The University of Washington is now offering a course, “Calling Bullshit”, to improve the quality of science.


In the mid-nineties, I read Wild Swans, an autobiographical story about three generations of Chinese women (the last being the author, Jung Chang) spanning about 100 years. If you want the abridged version, you can read it on Wikipedia.

After reading what they endured being on the losing side of a war, and then under Communist rule, I’m certain those three daughters of China would warn us to guard our personal information closely, and to watch how it’s being used against us. Random pieces of data given away here and there could become information weapons in the wrong hands, and not just against us but against our descendants.

This is just one of the many sources of a general feeling of foreboding that I have about my personal data.

The other forces that make me think a slow train wreck is coming:

  • Ease of dissemination of “information” due to social media
  • Growing ease of storage
  • Inability to destroy your own data; it is immutable
  • Diminishing interpretability of results


Below are some notes from the articles


Privacy, anonymity, transparency, trust and responsibility concern data collection, curation, analysis and use.

What is data ethics?

Floridi and Taddeo talk about three axes of data science ethics

Data ethics concerns the generation, recording, curation, processing, dissemination, sharing and use of the data.

The other two axes concern what is done with the data, i.e. the ethics of the algorithms and the ethics of the practices.

Regarding the algorithms, auditing the outcomes against a gold standard is essential, to ensure they achieve sensible and ethical results.
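An audit of this kind can be as simple as periodically re-scoring a set of hand-labelled gold-standard cases and flagging when the model drifts past a tolerance. A minimal sketch (the function name, threshold and data are all invented for illustration):

```python
def audit(predictions, gold, max_error_rate=0.05):
    """Compare model outputs to gold-standard labels; flag if error rate
    exceeds the agreed tolerance."""
    assert len(predictions) == len(gold), "audit sets must align"
    errors = sum(p != g for p, g in zip(predictions, gold))
    error_rate = errors / len(gold)
    return {"error_rate": error_rate, "passed": error_rate <= max_error_rate}

# One disagreement out of ten hand-labelled cases: a 10% error rate,
# which fails the (illustrative) 5% tolerance.
result = audit([1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
               [1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
print(result)
```

In practice the gold standard would be much larger and the audit would also break results down by subgroup, but even this shape makes the check repeatable rather than ad hoc.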

creating a professional code of conduct to ensure ethical practices

3 Key Ethics Principles for Big Data and Data Science

Jay Taylor

collect minimal and aggregate

identify and scrub sensitive data

have a crisis management plan in place in case your insight backfires

above all, teach ethics!








According to Mark Zuckerberg, Facebook is not a media company

According to Mark Zuckerberg, CEO , Facebook, the world’s largest social media platform[i] is not a media company[ii].

Zuckerberg explained in August 2016: “No, we are a tech company, not a media company… We build the tools, we do not produce any content.”[iii]

One of those tools is the Facebook News Feed, which provides each of the almost 2bn[iv] monthly active users with a hyper-personalised news stream: “…an algorithmically generated and constantly refreshing summary of updates…”[v] from friends and any other page a user follows, plus targeted ads and Page suggestions from Facebook. There is also the Trending module on the right-hand side of the Facebook user home page, which surfaces news stories and is created entirely by an algorithm[vi].

How Facebook News Feed works

The Facebook algorithm is complex, but it essentially works by identifying key features of a post (is it a video, who posted it, how often it was shared and by whom), and also uses natural language processing to identify the text, topics and sentiments within the post.

Then, in order to present relevant content to the specific user, Facebook analyses the past behaviour of the user and of other users across hundreds of factors, and predicts the likelihood that the user will engage with this piece of content because they, or people like them, previously engaged with this content type and topic. This likelihood, combined with the age of the content and how popular it is across the network, is its News Feed rank score. Content is then selected and sorted so that the highest-ranked content appears first in the news feed, with the rest presented in descending order.
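Facebook’s actual ranking model is proprietary and far more complex, but the ingredients described above can be combined into a toy rank score. The weighting scheme below (exponential recency decay, log-scaled popularity) and all the post data are my own assumptions for illustration:

```python
import math

def rank_score(engagement_prob, age_hours, popularity, half_life=24.0):
    # Recency halves every 'half_life' hours; popularity is log-scaled so
    # viral posts don't completely drown out everything else.
    recency = math.exp(-age_hours * math.log(2) / half_life)
    return engagement_prob * recency * math.log1p(popularity)

posts = [
    {"id": "video_from_friend", "engagement_prob": 0.6, "age_hours": 2, "popularity": 50},
    {"id": "viral_page_post", "engagement_prob": 0.3, "age_hours": 30, "popularity": 5000},
    {"id": "old_status_update", "engagement_prob": 0.5, "age_hours": 72, "popularity": 10},
]

# Sort the candidate posts so the highest rank score appears first.
feed = sorted(posts, key=lambda p: rank_score(
    p["engagement_prob"], p["age_hours"], p["popularity"]), reverse=True)
print([p["id"] for p in feed])
```

With these invented numbers, the fresh post from a friend outranks the day-old viral post despite its far smaller reach, which mirrors the trade-off between predicted engagement, age and popularity described above.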

The Facebook algorithm is constantly tweaked by Facebook through unsupervised machine learning, supplemented by the analysis of its team of data scientists and qualitative feedback from dedicated user focus groups.[vii][viii]

Benefits of Facebook News Feed

Using unsupervised text analysis and machine learning algorithms to find and serve up content to the specific user has many benefits: such hyper-personalisation can be performed economically at scale, giving huge international reach to content creators, publishers and interest groups.

Users are served up content that has a high probability of being from like-minded people, brands and groups, without having to search for it themselves (although that too is possible, utilising text analysis and search tools).

Brands and groups that know how to use the system can quickly gain followers or reach a large audience, which makes the platform great for brand awareness and for non-mainstream or minority causes to publish and broadcast their views.

In this regard, the Facebook News Feed offers its users the promise of free speech and a capitalist marketplace, as does the internet as a whole:

“What is driving the Net is the promise of political efficacy, of the enhancement of democracy through citizens’ access and use of new communications technologies.”[ix]

Facebook, as a technology company, builds the tools, and content creators and publishers then use the platform and the News Feed algorithm to find an audience for their content. Facebook positions itself as a neutral, laissez-faire “marketplace”, with community guidelines to prevent hate and crime from being encouraged[x].

Downsides of Facebook News Feed

However, recent events have highlighted flaws in the News Feed algorithm and in the processes for dealing with its errors. During the recent US election, it emerged that fake news sites were being promoted in people’s feeds to gain advertising revenue[xi]; the algorithm currently cannot distinguish legitimate news sites from satirical and/or fake ones. Facebook has also not developed its automated monitoring systems and escalation workflows at the same rate as its automated products: just this week, a horrific video of a man murdering another man in cold blood remained on the site for three hours after it was first reported[xii].

It is becoming increasingly difficult for Facebook to argue that it is not a media company, or that it does not have a responsibility to its users and the community for how its tools are used.

Facebook and its News Feed algorithm are under pressure to assure the community that they are not proliferating fake news, manipulating users’ emotions[xiii], promoting hate, discouraging respectful dialogue between both sides of a debate[xiv], or broadcasting violent and terrible video and taking too long to remove it[xv]. Even more so, they are under pressure from advertisers to ensure brands are not placed next to such content. Some advertisers have recently pulled advertising from Google and YouTube, and Facebook is very aware it could be next[xvi].

In addition, the algorithm is not transparent to Facebook’s users and cannot be reset, customised or trained by them. Users can find it frustrating and feel stuck in an echo chamber, open to manipulation by Facebook, lobby groups or unscrupulous advertisers who know how to game the algorithm.

“What if people “like” posts that they don’t really like, or click on stories that turn out to be unsatisfying? The result could be a news feed that optimizes for virality, rather than quality—one that feeds users a steady diet of candy, leaving them dizzy and a little nauseated, liking things left and right but gradually growing to hate the whole silly game.” [xvii]

The Verdict

On balance, I think the benefits of the Facebook News Feed algorithm and its natural language processing outweigh these costs. Facebook is still very much listening to its users, is aware that there is intense competition for their attention, and is therefore constantly working to improve the algorithm and its products.

For example, in January 2017 Facebook changed the Trending module to show only trusted news sources[xviii], in April 2017 it implemented a button to report possible fake news stories, and it has established a user group to provide real human feedback on the algorithm.

Facebook recently announced a project with the esteemed journalist Jeff Jarvis and CUNY to build relationships with, and support, credible journalism.[xix]

Even Mark Zuckerberg, CEO of Facebook, is changing his tune. In December 2016 he said:

“Facebook is a new kind of platform. It’s not a traditional technology company…It’s not a traditional media company. You know, we build technology and we feel responsible for how it’s used.”[xx]

Which is just as well, because whilst he might not want to admit that Facebook is a media company, 2bn users a month use it for their news, and if Facebook does not act responsibly, legislators will eventually catch on that Facebook and social media are very much key to the world’s media ecosystem.

End notes

[i] Facebook. [ONLINE] Available at: [Accessed 17 April 2017].

[ii] Giulia Segreti. 2016. Facebook CEO says group will not become a media company. [ONLINE] Available at: [Accessed 17 April 2017].

[iii] Giulia Segreti. 2016. Facebook CEO says group will not become a media company. [ONLINE] Available at: [Accessed 17 April 2017].

[iv] Facebook. [ONLINE] Available at: [Accessed 17 April 2017].

[v] Timeline of Facebook. [ONLINE] Available at: [Accessed 17 April 2017].

[vi] Facebook fires trending topics team. [ONLINE] Available at: [Accessed 17 April 2017].

[vii] How Facebook’s news feed algorithm works. [ONLINE] Available at: [Accessed 17 April 2017].

[viii] Ultimate guide to the Facebook News Feed. [ONLINE] Available at: [Accessed 17 April 2017].

[ix] Dean, Jodi (2005), “Communicative Capitalism: Circulation and the Foreclosure of Politics,” Cultural Politics 1(1): 62.

[x] Facebook, Controversial, harmful and hateful speech on Facebook. [ONLINE] Available at: [Accessed 17 April 2017].

[xi] How Facebook helped Donald Trump become president. [ONLINE] Available at: [Accessed 17 April 2017].

[xii] 2017. Murder video forecasts scrutiny at Facebook. [ONLINE] Available at: [Accessed 20 April 2017].

[xiii] Facebook reveals news feed experiment to control emotions. [ONLINE] Available at: [Accessed 17 April 2017].

[xiv] Financial Times, Facebook and the manufacture of consent. [ONLINE] Available at: [Accessed 17 April 2017].

[xv] 2017. Murder video forecasts scrutiny at Facebook. [ONLINE] Available at: [Accessed 20 April 2017].

[xvi] Google pledges more control for brands over ad placement. [ONLINE] Available at: [Accessed 17 April 2017].

[xvii] How Facebook’s news feed algorithm works. [ONLINE] Available at: [Accessed 17 April 2017].

[xviii] Facebook fake news trending algorithm. [ONLINE] Available at: [Accessed 17 April 2017].

[xix] Facebook Friends media journalism project. [ONLINE] Available at: [Accessed 17 April 2017].

[xx] Josh Constine, Zuckerberg implies Facebook is a media company, just not a traditional media company. [ONLINE] Available at: [Accessed 17 April 2017].



Is there a sexist data crisis? Hardly a crisis, but still important to resolve

In our session on Tuesday, Simon K suggested as an aside that we google “is there a sexist data crisis?”

I did (here is a BBC article with that exact title), but it got me thinking: this is hardly a crisis, and hardly new. Women are underrepresented in many important things.

For example, did you know that women (and other “minority groups”, such as non-Caucasians) are underrepresented in clinical trials? The article mentions this too. The FDA in the US has a program to try to increase the participation of women in these trials.

Systematic bias? Deliberate? Could be both.

Anyway, I will be sure to think more about it.





Missing data codification OR how to capture that slap across the face

Last week we read about missing data and how to plan for it. I found it super useful and applicable to our Quantified Self work: if we had codified our missing data, I would have had less chasing up to do, and we would have gained some insight into the boundaries of what we were willing to share with the group.

I found this YouTube video pretty informative in explaining the types of missingness: a value can be missing for reasons unrelated to any variable (missing completely at random), because of the missing value itself (missing not at random), or because of some other variable or combination of variables (missing at random). It also covered how to capture exactly WHY a value was missing in your surveys, as this in itself is very useful information, e.g. how to convey the slap across the face (“How very dare you ask me that?!”) in a Survey Monkey form.
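As a toy sketch of the codification idea, instead of leaving a blank cell you record a code for why the answer is absent, and those codes can then be tallied like any other data. The field names and code scheme below are hypothetical, not from any standard.

```python
# Hypothetical missing-data codes for a survey: record the reason
# an answer is absent rather than leaving the cell blank.
MISSING_CODES = {
    "NA_SKIPPED": "respondent skipped the question",
    "NA_REFUSED": "respondent declined to answer ('how very dare you?!')",
    "NA_NOT_APPLICABLE": "question did not apply to the respondent",
}

def summarise_missingness(responses: list[dict], field_name: str) -> dict:
    """Count each missingness reason recorded for one survey field."""
    counts: dict[str, int] = {}
    for row in responses:
        value = row.get(field_name)
        if value in MISSING_CODES:
            counts[value] = counts.get(value, 0) + 1
    return counts

# Example survey responses (made up): two refusals, one skip.
responses = [
    {"weight_kg": 70},
    {"weight_kg": "NA_REFUSED"},
    {"weight_kg": "NA_REFUSED"},
    {"weight_kg": "NA_SKIPPED"},
]
print(summarise_missingness(responses, "weight_kg"))
# {'NA_REFUSED': 2, 'NA_SKIPPED': 1}
```

A run of refusals on one question is itself an insight, the "slap across the face" captured as data rather than lost as an empty cell.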

Speaking of missing, I have missed a lot of opportunities to blog, including the Unearthed Hackathon experience. I hope to get to that soon!