Facebook is one of the quintessential platforms of the Web 2.0 movement, helping it take off in the early 2000s and continuing to shape the web today. The platform brings over a billion people online to connect socially, sharing their feelings, personal information, and more with friends, family, groups, and corporations. With so much at stake, Facebook faces a great deal of scrutiny, and this week the company is once again in the news, and not in a positive light.
Whatever its intentions, the company has again been caught mishandling its users’ privacy and data. Whenever Facebook changes its privacy settings, or the public learns more about how its data is being used, condemnation almost always follows. This time, Facebook may have gone too far.
In an effort to understand its impact on users, Facebook secretly tweaked the feed algorithm for hundreds of thousands of users to see how much what a user sees in the feed affects his or her mood. The theory is that the more happy posts people see, the more likely they are to share happy posts themselves; the opposite is likely true as well.
TIME Magazine recently published an article addressing an aspect of this story that hasn’t been discussed in much detail by critics or supporters. Author Janet Vertesi takes issue with the very idea of a private corporation conducting research that has previously been the task of the social sciences.
Ultimately at issue for Vertesi are the changes in social science funding and her conviction that corporations should not be at the forefront of this sort of research. She writes: “Facebook isn’t manipulating its users any more than usual. But the proposed changes in social science funding will have a more lasting effect on our lives both online and offline.”