Data Privacy & Policy
"A Survey on Metaverse: Fundamentals, Security, and Privacy"
Media type: Research article (IEEE)
Authors: Yuntao Wang; Zhou Su; Ning Zhang; Rui Xing; Dongxiao Liu; Tom H. Luan; Xuemin Shen
Summary: The article delves into the technical components and infrastructure required to build a metaverse, including network architecture, communication protocols, and distributed systems. It emphasizes the importance of interoperability and standardization to enable seamless interaction across different metaverse platforms.
Security and privacy challenges in the metaverse are addressed in depth. The authors identify potential threats such as identity theft, data breaches, and malicious activities. They discuss security measures such as access control, encryption, and authentication to safeguard user data and prevent unauthorized access.
Privacy concerns in the metaverse are also explored, focusing on issues such as data collection, user tracking, and information disclosure. The article examines privacy-enhancing technologies like anonymization, pseudonymization, and privacy-preserving algorithms that can be employed to protect users' personal information.
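The survey names pseudonymization as one of these privacy-enhancing technologies without giving code. As an illustration only, here is a minimal Python sketch of keyed pseudonymization (the record fields, key, and e-mail address are hypothetical, not taken from the article):

```python
import hmac
import hashlib

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    The same key maps the same user to the same pseudonym, so records
    stay linkable within one platform while the raw identity is hidden
    from anyone who lacks the key.
    """
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical metaverse telemetry record before sharing
record = {"user_id": "alice@example.com", "session_length_min": 42}
key = b"platform-local-secret"
record["user_id"] = pseudonymize("alice@example.com", key)
```

Unlike full anonymization, pseudonymization is reversible for the key holder, which is exactly why the article's concern about cross-platform data flows matters: any party holding the key (or a shared key) can re-link the data.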
The survey further highlights the legal and regulatory aspects of the metaverse, discussing intellectual property rights, content moderation, and user responsibilities. It acknowledges the need for clear guidelines and policies to address legal and ethical concerns in this evolving virtual environment.
Key takeaways:
The article mentions the importance of standardized protocols that enable interoperability between different metaverse platforms. These protocols allow users from various virtual worlds to communicate, collaborate, and share resources seamlessly. The risk is that a user believes they are consenting to share data with only one platform, while the data is actually distributed across interconnected platforms under that same consent.
There is a high need for open standards and protocols to enable cross-platform compatibility, allowing users to seamlessly transition between different metaverse environments without facing barriers or limitations.
Though the metaverse is intended to be 'immersive', its basic architecture makes it 'intrusive', especially when users are unaware of the extent of data collection.
The complex and interconnected nature of virtual environments can make it challenging for users to understand and manage how their information is collected, used, and shared. This lack of control and transparency contributes to a perception of intrusiveness. For instance, users would behave very differently if they were alert to the fact that all their actions are being monitored; but if data collection is concealed as "necessary movement" (such as purposeful actions in games), a user is more likely to explore freely, since they want to match what is expected of them.
A psychological study on the effects of "being watched" on inhibitory control found that participants who knew they were being observed found it easier to control their impulses, even when exposed to stimuli that would normally make this harder. Interestingly, participants' response times after making a mistake did not change, so the effect was not simply that they were trying to be more careful. These results suggest that the interplay of strong emotions and impulse control depends on how self-conscious we feel, and that even the presence of a webcam, as a symbol of being watched, can affect performance on tasks requiring self-control. Compare this to XR: if users are aware of "being watched", their natural reaction is to become more self-conscious, exert greater control over their logical/analytical mind, and suppress emotional impulses. So data collected about users may not reflect what a user 'wants' to do, but what they are doing because they feel expected to do it.
This raises an ethical question of power: who gets to decide what a user should be doing in XR? Who decides the purpose of their actions? Does it not place the owner of the XR space in a position of power over the user?
Citation: Wang, Y., Su, Z., Zhang, N., Xing, R., Liu, D., Luan, T. H., & Shen, X. (2023). A survey on Metaverse: Fundamentals, security, and privacy. IEEE Communications Surveys & Tutorials, 25(1), 319-352. https://doi.org/10.1109/comst.2022.3202047
"Four ethical priorities for neurotechnologies and AI"
Media type: Journal article
Journal: Nature
Authors: Rafael Yuste, Sara Goering, Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, Phoebe Friesen, Jack Gallant, Jane E. Huggins, Judy Illes, Philipp Kellmeyer, Eran Klein, Adam Marblestone, Christine Mitchell, Erik Parens, Michelle Pham, Alan Rubel, Norihiro Sadato, Laura Specker Sullivan, Mina Teicher, David Wasserman, Anna Wexler, Meredith Whittaker & Jonathan Wolpaw
Summary: The article emphasizes key ethical considerations in the development and use of neurotechnologies and artificial intelligence (AI). The four priorities outlined are privacy, agency and identity, bias and fairness, and ethical responsibility. The authors highlight the need for protecting individuals' neural data privacy, preserving human agency and personal identity, addressing bias and promoting fairness, and integrating ethical responsibility into the development and deployment of these technologies. By addressing these priorities, they aim to ensure responsible innovation and mitigate potential risks associated with neurotechnologies and AI.
Key takeaways:
People could end up behaving in ways that they struggle to claim as their own, if machine learning and brain-interfacing devices enable faster translation between an intention and an action, perhaps by using an 'auto-complete' or 'auto-correct' function. If people can control devices through their thoughts across great distances, or if several brains are wired to work collaboratively, our understanding of who we are and where we are acting will be disrupted.
One of the biggest concerns in brain-machine interaction is that machines can translate signals of the brain but not signals of the mind. For instance, inhibitory control is more of a 'mind-game' that heavily depends on the person's understanding of 'right' and 'wrong' in the society they are part of. A machine would pick up the impulse and translate it into an action as soon as it is detected, giving the human no time, choice, or authority to control every impulse of theirs.
Illustrating this prediction from the article: who is to blame? The person, the machine, or the interaction between the person and the machine?
"Consider the following scenario. A paralysed man participates in a clinical trial of a brain–computer interface (BCI). A computer connected to a chip in his brain is trained to interpret the neural activity resulting from his mental rehearsals of an action. The computer generates commands that move a robotic arm. One day, the man feels frustrated with the experimental team. Later, his robotic hand crushes a cup after taking it from one of the research assistants, and hurts the assistant. Apologizing for what he says must have been a malfunction of the device, he wonders whether his frustration with the team played a part. This scenario is hypothetical. But it illustrates some of the challenges that society might be heading towards."
The article suggests that neural data should be treated similarly to organs or tissues, where explicit consent is required to share the data. Regulations should be implemented to strictly control the sale, commercial transfer, and use of neural data. Safeguards such as differential privacy, federated learning, blockchain-based techniques, and open-data formats can be employed to protect user privacy and ensure transparency. These measures aim to address privacy concerns and prevent unauthorized use of neural data.
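The article lists differential privacy among these safeguards without detailing the mechanism. As an illustration only, here is a minimal Python sketch of the Laplace mechanism applied to an aggregate count (the neural-data scenario and parameter values are hypothetical, not from the article):

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample, generated as the difference of
    # two independent exponential samples with mean `scale`.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so Laplace noise with scale
    1/epsilon masks any single individual's contribution.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many trial participants showed a given
# neural response pattern? Release a noisy count instead of the truth.
released = dp_count(true_count=100, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the released value remains useful for aggregate statistics while no single participant's inclusion can be confidently inferred.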
A 2016 study highlights the case of a person who, while using brain stimulation to treat depression, experienced uncertainty about their actions and questioned their own identity. As neurotechnologies advance, they could blur the line between an individual's intentions and their actions, potentially affecting personal responsibility. The article suggests that individual identity and agency should be protected as basic human rights, proposing the inclusion of "neurorights" in international treaties. The creation of an international convention and a United Nations working group is recommended to address prohibited actions related to neurotechnology. Additionally, it emphasizes the importance of educating individuals about the cognitive and emotional effects of these technologies.
The article also discusses the possibility of changing societal norms, issues of equitable access, and new forms of discrimination arising from the pressure to adopt these technologies. There is a concern about an augmentation arms race, particularly in military settings, where enhanced mental abilities could be used. The authors recommend the establishment of international and national guidelines to set limits on the implementation of augmenting neurotechnologies and define their appropriate contexts. Culture-specific regulatory decisions should be made while respecting universal rights. The article suggests drawing on precedents of international consensus and public opinion incorporation in scientific decision-making, such as treaties on chemical and biological weapons and the establishment of commissions for atomic energy. Strict regulation of neural technology for military purposes is also proposed, preferably through a global moratorium led by the United Nations.
Citation: Yuste, R., Goering, S., Arcas, B. et al. Four ethical priorities for neurotechnologies and AI. Nature 551, 159–163 (2017). https://doi.org/10.1038/551159a
"The Future Is Now: Wrestling with Ethics, Policy and Brain-Computer Interfaces"
Media type: Blog article
Author: Matt Shipman (North Carolina State University)
Summary: The article discusses a new book called "Policy, Identity, and Neurotechnology: The Neuroethics of Brain-Computer Interfaces" that explores the ethical and policy issues surrounding brain-computer interfaces (BCIs). BCIs are technologies that can read and translate brain activity into computer-readable formats. The book examines the ethical questions raised by BCIs, such as user safety, changes in personal identity, and social implications. It also considers the policy challenges associated with regulating BCI technologies and offers recommendations for the future. The authors emphasize the need for ongoing awareness of the rapid advancements in BCI technology and the potential societal impact of widespread adoption.
Key takeaways:
Technological advances in healthcare are resulting in a surrender to technology rather than a dependence on it. In the case of dependence, power is equally distributed in the relationship. In the case of surrender, all the power lies with the technological equipment, and ultimately with the owner or controller of the data collected.
Citation: Shipman, M. (2023, April 28). The future is now: Wrestling with ethics, policy and brain-computer interfaces. NC State News. Retrieved June 28, 2023, from https://news.ncsu.edu/2023/04/ethics-brain-computer-interfaces/
"When “I” becomes “We”: ethical implications of emerging brain-to-brain interfacing technologies"
Media type: Journal article
Summary: The article "When 'I' becomes 'We': ethical implications of emerging brain-to-brain interfacing technologies" explores the ethical considerations surrounding the development and use of brain-to-brain interfacing technologies. These technologies enable direct communication and information transfer between individuals' brains, potentially blurring the boundaries of individual identity and agency. The article highlights several key ethical concerns related to privacy, consent, autonomy, and potential societal impacts.
Key takeaways:
One major ethical concern is the preservation of privacy and mental integrity. Brain-to-brain interfacing could allow access to an individual's private thoughts, emotions, and memories, raising concerns about unauthorized access, manipulation, or misuse of personal information. The article emphasizes the importance of ensuring robust security measures and strict consent protocols to protect individuals' privacy and mental autonomy.
Consent and agency are also crucial issues. The ability to interface brains raises questions about whether individuals can truly give informed consent to participate in such interactions. There is a need to establish clear guidelines and frameworks to ensure that consent is freely given and that individuals maintain control over their own cognitive processes.
Additionally, the blurring of individual boundaries in brain-to-brain interfacing raises concerns about personal autonomy and the potential for coercion. The article underscores the importance of preserving individual agency and preventing situations where one person's thoughts or actions could be controlled or manipulated by another through brain interfaces.
Societal implications are another area of concern. The widespread adoption of brain-to-brain interfacing could lead to significant social changes and inequalities. The technology may exacerbate existing disparities, with potential implications for education, employment, and communication. It is essential to address these potential impacts and ensure equitable access and distribution of the technology to prevent further marginalization.
Citation: Trimper, J. B., Wolpe, P. R., & Rommelfanger, K. S. (2014). When “I” becomes “We”: Ethical implications of emerging brain-to-brain interfacing technologies. Frontiers in Neuroengineering, 7. https://doi.org/10.3389/fneng.2014.00004