Knowledge Trees: Exploring student control over data privacy through speculative design.

Description

In this paper, I discuss a speculative design artifact that explores how students can have more fine-grained control over the release of their own learning data. Current MOOC platforms and adaptive learning systems encourage students to opt in to releasing their learning data so the system can offer a more personalized learning experience. However, students do not always have enough information to make informed cost-benefit decisions about their privacy settings. The artifact I discuss here explores a system designed to give students that control and to let them directly reap the benefits of the data that is “harvested” from them.

Takeaway

Because of the black-box nature of data collection in eLearning systems, students are unable to make fine-grained decisions about releasing their learning data, which could be used for their educational benefit. I present here a speculative design of an adaptive learning system that allows students to release particular elements of their learning data to earn data “points” to be “spent” on personalized learning recommendations. Through speculative design, an approach to exploring possible future scenarios, I hope to make explicit the metaphor of the educational “value” of student data and to provoke a conversation about students’ control over their data privacy.

Abstract

Higher education institutions (HEIs) often use student data collected from MOOCs, adaptive learning systems, and other eLearning tools to personalize students’ educational experiences or intervene with at-risk students. However, the panoptic nature of that data collection (often referred to in acquisitional terms like “harvesting” and “capturing”) is at odds with students’ desire for privacy and control over their own data. While HEIs have a fiduciary duty to provide the best educational experience possible, they must not abuse their asymmetrical power over students by collecting students’ data without their knowledge or informed consent.

Despite the 83 student data privacy bills introduced in 32 state legislatures in 2014 alone, current data privacy laws leave decisions about “quasi-identifiable data” (anonymized data that can nonetheless be used to reveal personal information) in the hands of individual institutions. The students themselves, meanwhile, have little or no control over the release of their information to third-party companies, researchers, employers, or other universities. Even when students have control over the release of their data, consent is given or rescinded by opting entirely in or out of allowing their data to be collected and used in learning analytics, often without a clear understanding of the costs and benefits of these decisions. If, however, students can be thought of as active collaborators in the process of “harvesting” and using their own learning data, we can imagine a different path than the binary choice of using or forgoing an educational tool.

This paper describes a speculative design artifact, entitled “Knowledge Trees”, built to generate a conversation about how students are incentivized to release their data. It examines how they are “nudged” to release ever greater amounts of their personal learning data for their benefit and for the benefit of their peers. Unlike engineering design, speculative design does not take as its goal the design of a fully operational system; instead, it aims to explore possible future scenarios and their consequences by foregrounding, through prototyping, the socio-political implications of technology design.

Knowledge Trees is an adaptive learning system that gives students fine-grained control over the release of their learning data so that it can be used to provide personalized recommendations and content for them. Although many advocates of learning analytics argue for its metaphorical educational value, in this system, student data is treated as a transactional entity with quantified value for the students. They earn points by releasing information about their learning behaviors and interactions with the system, and can spend those points on personalized recommendations about their next learning objective. Such fine-grained control allows students to make more informed decisions about the release of their data and offers a middle ground between opting entirely in and opting entirely out.
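The earn-and-spend mechanic described above can be sketched in code. This is a minimal illustrative sketch only, not the actual Knowledge Trees implementation: the class and method names (`DataLedger`, `release`, `revoke`, `spend_on_recommendation`), the data elements, and the point values are all hypothetical assumptions made for the example.

```python
# Hypothetical point values assigned to individual data elements a student
# may choose to release. These names and numbers are illustrative only.
POINT_VALUES = {
    "time_on_task": 2,      # aggregate time spent per learning objective
    "quiz_attempts": 3,     # number and timing of quiz attempts
    "navigation_trace": 5,  # click-level path through course content
}

RECOMMENDATION_COST = 5     # assumed cost of one personalized recommendation


class DataLedger:
    """Tracks which data elements a student has released and their point balance."""

    def __init__(self):
        self.released = set()
        self.balance = 0

    def release(self, element):
        """Opt in to sharing one data element; earn its point value once."""
        if element not in POINT_VALUES:
            raise ValueError(f"unknown data element: {element}")
        if element not in self.released:  # no double-earning on re-release
            self.released.add(element)
            self.balance += POINT_VALUES[element]
        return self.balance

    def revoke(self, element):
        """Opt back out of a single element without leaving the system entirely."""
        self.released.discard(element)

    def spend_on_recommendation(self):
        """Exchange accumulated points for one personalized recommendation."""
        if self.balance < RECOMMENDATION_COST:
            raise ValueError("not enough points for a recommendation")
        self.balance -= RECOMMENDATION_COST
        return "recommended next learning objective"
```

The per-element `release` and `revoke` methods are what distinguish this mechanic from the all-or-nothing consent model: a student can share their quiz attempts while withholding their navigation trace, and see exactly what each choice is worth.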

In this paper, I present Knowledge Trees, a speculative design artifact that instantiates a world we may be heading towards, to stimulate critical reflection on whether the educational values of learner agency and control still have a place in such a world.