Roundtable on Human Rights and the Business of Social Media: City University Hong Kong 25 June 2018

Date and time: Monday, 25 June 2018 (9:30am – 12:30pm)

Venue: P5401, 5/F, Yeung Kin Man Academic Building, City University of HK

The Human Rights Law and Policy Forum (HRLF) of the School of Law of City University of Hong Kong will organise a Roundtable on “Human Rights and the Business of Social Media” on 25 June 2018.

Against the backdrop of recent controversy regarding Facebook’s failure to protect the personal data of its users and concerns related to the spread of ‘fake news’ or ‘religious hatred’ on social media platforms, this Roundtable will explore how social media is impacting the existing human rights discourse and what new challenges it is posing to the conventional approaches of regulating the behaviour of both individuals and businesses.

A panel of experts will lead discussion on issues such as the following: (1) Is social media ‘media’? How social is it? (2) How should social media deal with different shades of truth? (3) What are the positive and negative human rights impacts of social media? (4) What are the rights and responsibilities of social media companies? and (5) How to regulate social media: what role for states, businesses and individuals?

PowerPoints:

Larry Catá Backer: Access HERE <BHRWorkShopCUHK6-2018>

Roundtable Report Follows:

I was delighted to be part of the Roundtable on Human Rights and the Business of Social Media, sponsored by the Human Rights Law and Policy Forum (HRLF) of the Law Faculty of the City University of Hong Kong. My thanks to Surya Deva for organizing what turned out to be a fascinating meeting with lots of quite powerful and thought-provoking engagement.
This Post includes the Roundtable Program and my own summary notes of the discussion during the Roundtable.

Roundtable on Human Rights and the Business of Social Media <ACCESS HERE>

Date and time: Monday, 25 June 2018 (9:30am – 12:30pm)
Venue: Law School Conference Room (P5401), 5th Floor, Yeung Kin Man Academic Building City University of Hong Kong

Programme

09:30am–09:35am  Welcome remarks
Surya Deva, Associate Director, Human Rights Law and Policy Forum (HRLF)

Prof. Deva introduced the work of the Human Rights Law and Policy Forum.

The Human Rights Law and Policy Forum (HRLF) brings together multidisciplinary experts on human rights from the School of Law. The Forum provides a platform for discussion and knowledge exchange, and develops global collaborations with the aim of establishing itself as a preeminent human rights institution in Asia. HRLF is organized around four themes: human rights in Hong Kong and Asia, criminal justice, business and human rights, and the sustainable development goals (SDGs).

He emphasized the open-textured format of the event, designed as a brainstorming session, and welcomed all participants.

09:35am–10:50am  Session I: How is Social Media Shaping Identities, Relationships, Media, Human Rights, Businesses and States?

Discussion Leads:

Sarah Joseph, Professor and Director of the Castan Centre for Human Rights Law, Monash University, Australia

Jernej Letnar Černič, Associate Professor, Graduate School of Government and European Studies, Slovenia

Prof. Joseph set the stage for the discussion. She started with a set of holistic reflections on the major themes that criss-cross this area. Social media has been good for the global democratization of speech. There are no guarantees that anyone will listen, but at least speech is far less mediated by powerful actors (the state, other institutions, etc.). The downside includes privacy issues. Social media has also brought an increase in accountability for speech: the freer one is to speak, the more consequences there may be for speech. Example: what happened to Roseanne Barr, who said something egregious on social media and lost her show. She is not the most sympathetic figure, but this is happening with increasing frequency for far less egregious expressions.

Prof. Joseph also spoke to social media relationships. She noted the challenges of data-driven targeting, especially by social media entities. We move here from the Cambridge Analytica scandal to the creation of engaged community; social media can create or divide communities. As to media and business, Prof. Joseph noted that there was a slow response to the new social media culture. Mainstream media is still trying to adjust. But social media has created some new issues, including fake news (a study noted that fake news spreads at a faster rate than “real” news). This is a function of the role of “bots”, but it may also be a function of the blending of news, information and entertainment. Business is hit or miss with its use of social media. It can make business more accountable, but it can also produce discipline through brand management. And on states, Prof. Joseph noted the opportunities and challenges of official use of social media by state officials.

Prof. Černič added context to the framing undertaken by Prof. Joseph, especially in the context of developing states. He started with a discussion of core values: control and power over communication. The absence of arbitrariness of power is a key concept; communication can foster the rule of law by shifting control of discursive space from states or powerful institutional actors to mass and dispersed control. He noted the EU constitutional right of access to information as a means of dispersing power over communication. But of course there are some drawbacks. Many non-Western states have had bad experiences. Indirect supervision by states and other entities can be troubling. New technologies of state supervision may thwart the shifting of power over discursive space. This affects the character and vectors of accountability, with political consequences. Individuals may produce communication but are traced everywhere by enterprises and states, and this undermines the rule-of-law effects of democratizing communication through forward-looking technology. But states can also help, e.g., through the EU General Data Protection Regulation. So one is left with a balancing: for every opportunity produced by social media in the political, economic and cultural sphere, there appears to be a challenge as well from key institutional actors.

Questions / Comments

Participants raised a number of questions and issues after the broad introduction. Social media is both a communication mechanism and a data-collection mechanism. People like the communication part but like the data harvesting far less. The scale and use of collection pose substantial new problems. What one can do with the data remains a frontier. These are worrying trends, especially because one does not know where the technology is going and because of the regulatory lag attendant on these advances. All of this goes, to some extent, to the quality of deliberation: more speech, constrained by the forms of social media, may not further the ideals of democratic or social discourse that advance social and political principles. This is especially the case as AI begins to replace individuals as speech participants, grounding their speech on the algorithmically developed implications of harvested data, which then affects the character of data harvested going forward. But that potential lack of alignment remains substantially unexplored.
Another set of issues centered on the psychological impact of social media. Apple is offering controls on phone use. Are there medical correlations between mental illness and social media use? Should social media providers or tech enterprises control the means of constraining the likelihood of actions that may bring on illness? And again, there is a regulatory lag in this area. Consider the transformation of individuals from persons to brands: the recent issue of people as Instagram brands, and the problem of not being able to live up to the illusion of the brand projected. Does this implicate the global human right to health? And if so, who ought to bear the responsibility for mediating this issue?

Social media and misinformation, and social media and too much information, are two issues that may touch on the right of individuals to information. Much information is now controlled by the private sector, and the combination of fake and excessive information may make the quality of information problematic. To what extent might this affect economic decision making? On the one hand, social media has been instrumental in developing social power (the Second Pillar of the UNGPs); on the other, there is an issue of constraining principles.

The other issue touches on hate speech. Who defines it in the social media context? Context and selectivity could then skew the scope of discourse. Who gets to decide? Consider Twitter, which appears to take white supremacy seriously (an American issue of the moment) but worries less about hate speech that deals with discriminatory structures indigenous to developing states that do not involve developed states’ privilege structures.

And then there is the problem of the algorithm. Algorithms may reinforce political bias, but may also detach the course of discourse from individuals to mass-managed, data-driven formulas. If that detachment is real, to what extent do individuals lose control of their autonomy? Worse, perhaps, is the reductionist element of content: discourse by meme has become both a liberating and a distorting element of communication. There is no way to deal with complex issues and policy choices via meme or “3 minute reads.”

Human rights impacts touch both on the problems caused by or through social media (the “addiction” issue) and on the choices made by actors to respond to these problems (eugenics and social media). Will psychologists now govern social media rather than politicians or business people? If that is the case, is there a greater human rights problem in limiting human autonomy in the name of constructed medical regimes?

And then back to privacy. It was noted that Baidu’s founder suggested that Chinese people were not sensitive to privacy issues because they were culturally comfortable exchanging privacy for convenience. This may be a global reality among many. Should companies and the state be permitted to make rules or regulations based on these cultural notions, or should individuals drive these self-assessments? Is it presumptuous for institutions to make these assumptions? Should there be accommodation for outliers? Should human rights in this case be solicitous of the outlier or protective of the majority and the majority’s tastes? Sometimes social media employees themselves may be a source of protection against human rights wrongs committed by enterprises; e.g., thousands of Google employees protested the company’s involvement in the Pentagon AI drone program.

Is it necessary to unpack social media? That complicates analysis and regulatory (and self-regulatory) responses. There are vast differences among Google, Baidu, Twitter, Facebook and WeChat. This makes analysis more complicated, but it also complicates regulatory approaches as well as the scope of human rights discourse, monitoring and mediation. The definitional issue has a strong impact on the scope of, and approaches to, regulation, including self-regulation. But definition is itself an assertion of power to impose meaning that may go more toward underlying political ideologies than toward “facts.”

There is an irony that social media companies, especially companies that manage or produce content, tend to shy away from the openness and transparency they encourage for the use of their services. It was noted that Zuckerberg’s appearance before the EU Parliament was conditioned on substantial control of the questions to be asked and on the sessions being held behind closed doors. This disjunction between the cultures of social media and the business conduct of social media providers suggests an area of investigation.

The issue of corporate values (do corporations have values?) generated substantial discussion. The business case for corporate values can reduce values to a mirror of markets. On the other hand, if corporate values become important even in this respect, it suggests the possibility of 24-7 monitoring and social control in ways that ought to be problematic. Should individuals be subject to 24-7 monitoring for compliance with values? Should private activities result in professional consequences for employees? Is this itself a human rights violation of autonomy? The hard question: are there public-consequence-free zones? Does this suggest distinctive roles for public and social regulation? But here the problem may be more subtle: to some extent, government may be necessary to protect individuals from social consequences where those consequences may impact individual human rights.

The issue of human rights versus risk management was suggested. There are profound differences. Risk minimization may make plausible regimes where human rights can be waived. But there may be a human right to make that bargain. Are there limits? These are issues that involve line drawing. And that may be the ultimate reduction of this conversation: discovering the spaces where lines are drawn, and then analyzing the politics behind the principles through which lines are drawn by communal representatives and imposed on communities, even on those who find the lines obnoxious.

And, indeed, what does one do with the “alt” communities? On the one hand, these cross cultural and political taboos. On the other hand, ought there to be a space where such taboos may be crossed? Is this a free or a managed space? What does one do with the spillover effects of these spaces? Here one confronts the problem of the politics of tools and methods: the willingness of communities to permit the use of social media in particular ways to advance particular political goals or classes, while condemning the exact same use of those tools to further the agendas of other political goals or classes.

10:50am–11:10am  Tea break

11:10am–12:30pm  Session II: Human Rights Responsibilities of Social Media Companies and Regulation/Moderation of Content

Discussion Leads:

Larry Backer, Professor of Law and International Affairs, Penn State University, USA

Surya Deva, Associate Professor, School of Law, City University of Hong Kong, Hong Kong

Backer considered the environment within which it is possible to speak to the task of constructing a regulatory regime. He suggested the scale of the complexity of a project in which every individual is both the object of protection and a bearer of a responsibility or duty to protect or respect human rights. He developed a set of considerations around a PowerPoint that can be accessed HERE: BHRWorkShopCUHK6-2018. He noted the challenges of any approach to social media governance within a context in which every actor is both an actor against which the law must protect and an actor protected by regulation. Within that context, multiple governance standards have made coherent regulation difficult. The wide variation in the mechanics both of social media outputs and of regulation makes simple governance impossible. To these are added the complications of social media as a business, a tool, and a community space. Controlling content is unhelpful as a principal basis for governance, and managing distribution alone will not be helpful either. He suggested a possible template for regulation based on a distinction between content-related issues and distribution-related issues, borne by both the companies and third parties.

Professor Deva spoke to the regulatory assumptions underlying approaches to change. The traditional regulatory assumption: the state sits on top of a vertical arrangement, with media companies as gatekeepers and individuals on the bottom as consumers rather than creators of information. But this traditional regulatory paradigm is giving way to a new one. First, everyone is both the subject and the object of regulation. Second, states can misuse social media for their own ends, including in ways that may infringe on human rights (e.g., treating social protest as terrorist activity); vesting states with sole regulatory authority may not be a great idea. Third, the very nature of the business is transnational, so national regulation is not particularly efficient, and effective jurisdiction may be a function of nationality. Example of WeChat: China asserts that Chinese citizens must comply with Chinese law wherever in the world they use WeChat, effectively saying that Chinese law is global law for Chinese citizens. Fourth, regulators are not always human beings; the automation of regulation poses problems. Fifth is the problem of identity. Many users do not use their real names: a problem of protection of identity but also of misleading others. Sixth, economic motives may trump human rights, especially speech. Companies have deep economic motives that will be privileged; can we trust them? Seventh, private versus public space: what is the character of the discursive space over which Facebook presides? Has Facebook been governmentalized in its control over that space? What is public and what is private in the cyberworld is hard to distinguish. Eighth, social media may perpetuate discrimination, and will continue to perpetuate power imbalances. Taken together, these eight points provide a good framework for understanding the scope and direction of paradigm shifting in this context.

He offered a case study from India. Muslim people in India suspected of carrying beef may be subject to social media campaigns outing them for selling beef, with social consequences. People may be attacked, with instances of people injured or dying as a result. Social media is thus used to perpetuate and disseminate a rumor with substantial effect. The state has a role to play. But one also has to consider the responsibility of the group administrator who lets these messages post. What ought to be the internal standards applied, and what mechanisms can be used to minimize risks? Plus one has to evolve community standards; NGOs are essential. Should the state be involved? What role for the enterprise? What is the discursive space within which these community standards are shaped? They should not be shaped purely or primarily by economic considerations. Bottom-up expectations ought to align with human rights norms and consequences.

Questions / Comments

Participants spoke to the issue of regulating group moderators. China has made WeChat group moderators liable, as with the offense of “picking quarrels and creating troubles.” This raises issues of self-censorship; are there alternatives? But this is also a problem in the West. Canadian courts have insisted that content be removed globally as well.

What are the corporate responsibilities in this context; does technological capability drive the scope of obligation? An example is the problem of a social media mining tool to identify at-risk workers, which produces lots of legal risk: if people are identified, does the enterprise notify the state? What happens if the enterprise fails to act? Which enterprise in a production chain? There is a related problem of abuse by gatekeepers. For the enterprise, human rights due diligence (HRDD) has to be centered on standards and monitoring shaped to issues of content and issues of distribution. But companies have distinct responsibilities as users of social media and as social media companies. The political aspect of the responsibility to protect discursive space is embedded in the work of social media companies; but a responsibility also falls on social media users.

What about nuance, especially in the context of hate speech? Regarding fake news, this may be a matter subject to developing international standards. There is a problem of definition: contrast fake news with satire. When the state engages in actions that undermine political rights, that is the core concern; but what about corporate responsibility? If companies start to take down what they think is fake news, it will affect little companies more than big ones and will be guided by political considerations. Political speech versus commercial speech.

What is the relationship between state and corporate obligations, especially concerning the right to criticize one’s government? Two recent cases in Spain, relating to Catalan independence and to criticizing the King, were both prosecuted under terrorist laws (one defendant escaped to Belgium; the other is appealing). Is there any obligation on the part of enterprises to protect individuals and content from the state? Social media companies might have not only negative obligations but also positive obligations to take active measures to protect individuals and content against the state.

Re Russian interference: problematic, but how great is the offense? Is it a matter of disguising the source of content, or is it the source of the content itself? An approach to the regulation issue: is there a way to go about it by focusing on the harms created? Focus on hate speech, abuse of data collection, trolling and incitement? That would be a responsive rather than proactive approach. Here the issue of extraterritoriality and the nature of projecting foreign interests abroad has to be balanced against consideration by an electorate of the effect of local policies on international relations. Is this too much responsibility to vest in either the election process or the role of the electorate in democratic participation?