THEME: Networked Influence and Virality — REVISITED
Join us on July 18–20, 2018 for the 9th annual International Conference on Social Media and Society (#SMSociety). The conference is an interdisciplinary gathering of social media researchers, practitioners, and analysts from around the world. The 2018 conference is hosted by the Centre for Business Data Analytics at the Copenhagen Business School.
Anabel Quan-Haase, Western University, Canada – Panel Chair
Luke Sloan, Cardiff University, UK – Panel Chair
Special Issue of Social Science Computer Review (SSCR)
Wenhong Chen, University of Texas at Austin
Anabel Quan-Haase, Western University
Aim and Scope
Big Data are dramatically changing many aspects of social life, including political elections, public discourse, business, public health, and journalistic practices. Big Data have gained new meaning and are no longer restricted to digitally collected information; rather, they encompass any and all information collected, stored, linked, and analyzed either online or offline. Accordingly, scholars from multiple disciplines are increasingly interested in investigating the ethics and politics of Big Data. Big Data and their meaning are socially constructed and influenced by evolving social, political, and technological forces. The Arab Spring of 2010 uniquely demonstrated the political side of Big Data and the role social media engagement came to play in mobilizing societal groups. The 2016 US presidential election further raised questions about the use of Big Data for political purposes and the ethics of harnessing its power to those ends. Addressing data ethics and politics is thus an integral part of Big Data studies. The study of Big Data ethics requires new understandings, as Big Data have a unique set of features and parameters, with complexity ranging from data sampling to informed consent to data analytics. New ethical dimensions and questions are surfacing as more scholars engage in Big Data projects. Big Data face challenges of objectivity, accuracy, veracity, and inclusiveness: bigger does not always mean better, accessible does not always mean ethical, and convenient does not always mean efficient. It is important to understand and create awareness of the biases and limitations inherent in Big Data studies, especially when their predictive power is taken for granted.
This special issue calls for theoretically grounded, empirically sound, original work that advances a balanced and context-rich understanding of Big Data ethics and politics; epistemological and methodological vantage points that help make visible the contradictions, gaps, and omissions of Big Data are especially welcome. It encourages work with diverse theoretical and empirical approaches that sheds light on the ethical and political concerns, consequences, and contingencies of various aspects of Big Data studies.
Topics of interest include (but are not limited to):
What are the historical developments and current socio-political trends that influence and structure Big Data production, distribution, and application?
What power relations influence, structure, and play out in Big Data?
What are the socio-technical processes underlying data collection and storage? From whom is data collected and by whom is data used?
How are class, gender, and processes of racialization presented and represented in and through Big Data?
What are the global patterns and local variations of Big Data practices, ethics and politics, especially when comparing liberal democracies and authoritarian regimes, the global north and the global south? Given the uneven access and use of Big Data methods and analytics, is there a data imperialism or data nationalism?
How are Big Data being shaped by and how are they shaping ethical and political debates? For instance, how do social, economic, and political factors affect how governments and businesses collect and exchange Big Data? How are Big Data practices facilitated and constrained by notions of digital capitalism?
How do Big Data and methods reproduce and repurpose institutional logics and organizational meaning? Do Big Data contribute to or hinder an informed, connected, and engaged public? How do actors align or resist privacy erosion and the prevalence of ubiquitous, invisible algorithms? How do people and organizations use data to navigate and negotiate their social and geographic landscapes? What are the incentives and disincentives for individual and institutional actors to “game the system” and engage in data activism? Can Big Data facilitate access and mobilization of resources for social change?
How can Big Data and existing social science theories and methods be mutually beneficial to one another? How can Big Data better address issues such as construct validity, reliability, replicability, and temporal confounds? How can comparative or cross-platform approaches capture the divides, dilemmas, and dividends of Big Data? What types of data access and ethical guidelines can be developed for researchers and practitioners?
May 31, 2017: Send a 500-word abstract to email@example.com
June 30, 2017: Decisions on abstracts
November 30, 2017: Send full papers to firstname.lastname@example.org
January 30, 2018: Reviews returned to authors with publication decisions
February 28, 2018: Final and revised papers are due
All submissions will be subject to the journal’s standard peer review process. Authors should follow the Information for Contributors of Manuscripts as published on the SSCR website. Information about formatting guidelines can be found at https://faculty.chass.ncsu.edu/garson/SSCORE/library.htm
Expected publication date is June 2018
For inquiries, please contact:
Wenhong Chen, Assistant Professor of Media Studies and Sociology, Moody College of Communication, University of Texas at Austin
Anabel Quan-Haase, Faculty of Information and Media Studies / Department of Sociology, Western University
The panel will provide an overview of critical themes covered in the Sage Handbook of Social Media Research Methods, to be published in 2016. The Handbook is the first book to cover not only the entire research process in social media research, from question formulation to the interpretation of research findings, but also to include specific chapters and examples of how data collection and analysis take place on particular social media platforms such as Twitter and Instagram. Our panel will focus on a critical theme that weaves through the entire handbook, namely the tensions and controversies that have emerged around two fundamentally different approaches to the study of social media: big data vs. small data. Three central themes will be explored in an interactive format that includes a live poll and feedback from the audience: (1) the contributions to scholarship that big data and small data make and the contexts in which each approach is appropriate; (2) the tension between big data analytics and small data; and (3) approaches to combining and integrating the two so that they can inform each other.
The main objective of the present panel is to discuss key challenges in the study of social media, specifically methodological issues. At the core of the discussion will be the tension between large-scale quantitative and small-scale qualitative approaches, which are often perceived to sit at opposite ends of the spectrum. Our approach consists of first discussing the contributions that each perspective makes to our understanding of social media phenomena. We explore the following two questions: What is the contribution of large-scale quantitative approaches? What is the contribution of small-scale qualitative approaches? Members of the panel will discuss the strengths of each approach and draw specifically on their own research experiences to demonstrate how they have employed either a big data or a small data approach. The aim of the discussion is to get a better sense of the contexts in which each approach is appropriate and the kinds of insights it allows scholars to make. We will then explore the tension that exists between big data and small data approaches and provide an overview of cutting-edge methodological innovations that help bridge the divide. The two questions of interest are: Where does the tension between large-scale quantitative and small-scale qualitative approaches come from? How can small data inform big data, and vice versa?
Two key questions will guide the panel discussion and ask for audience members’ opinions as well:
What is the contribution of large-scale quantitative approaches? What is the contribution of small-scale qualitative approaches?
Names of panelists and their perspectives:
Claudine Bonneau and Mélanie Millette
Will discuss the contribution of small-scale qualitative approaches. Drawing on a recent case study in social media research, they will propose various “data thickening methods” to illustrate qualitative strategies whose scope crosses the boundaries between “small” and “big” datasets, and between “trace data” and data of other origins. They show that data thickening is essentially a relational process: it happens when links are created with other data that operate as metadata. In the process of thickening trace data, the thickening occurs not only on the side of digital traces but also with other qualitative data collected along the way.
Martin Hand
Will discuss the unique contribution of small-scale qualitative approaches based on visual data, emphasizing visual culture methodologies that conceptualize visuality in social media as comprising three broad elements, each of which has to be successfully negotiated in methodological terms. First, images themselves can take many forms and require careful attention to established approaches in visual studies while acknowledging the specific qualities of the digital. Second, the circulation of visual data in social media destabilizes the objects of study in ways that challenge visual analysis concerned with locating textual meanings. Third, while the visualization of social practices through social media appears to offer unprecedented access to social life, the detail of such practices often remains obscure if we focus solely on images. We need to ask how visual objects are generated and used, and how people make sense of the visual in using social media. Pulling these three dimensions apart and then together is difficult. This, he suggests, is the current predicament of visual studies of social media.
Ravi Vatrapu
He addresses an academic research gap and a real-world industry need to describe, model, analyse, and explain large-scale interactions on organisations’ social media channels as individuals’ associations with ideas, values, identities, etc. Towards this end, we are developing and evaluating a set-theoretical approach to big data analytics termed “Social Set Analysis” (SSA). Social Set Analysis consists of three primary research activities: (a) theorising, modelling, and collecting big social data about organisations (e.g., the Danish Cancer Society’s official Facebook page); (b) combining those big social data sets with in-house organisational data sets (e.g., from Customer Relationship Management systems); and (c) analysing the combined datasets by applying set-theoretical methods and tools (crisp sets, fuzzy sets, rough sets, random sets, and Bayesian sets). This talk will outline the SSA approach, report selected empirical findings, discuss implications and limitations, and identify challenges and future research directions.
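The set-theoretical idea behind SSA can be illustrated with a minimal sketch (hypothetical user names and topics, not the actual SSA tooling or data): crisp sets record a binary association between users and an idea, while fuzzy sets grade that association by degree.

```python
# Minimal illustration of set-based social data analytics.
# The users and topics below are hypothetical examples.

# Crisp sets: a user either is or is not associated with a topic.
commented_on_prevention = {"ana", "ben", "caro"}
commented_on_fundraising = {"ben", "dina"}

# Classical set operations answer overlap questions directly.
both = commented_on_prevention & commented_on_fundraising         # intersection
either = commented_on_prevention | commented_on_fundraising       # union
only_prevention = commented_on_prevention - commented_on_fundraising

# Fuzzy sets: association is a matter of degree (e.g., the share of a
# user's posts mentioning a topic), a membership grade in [0, 1].
prevention_grade = {"ana": 0.9, "ben": 0.4, "caro": 0.7, "dina": 0.0}
fundraising_grade = {"ana": 0.1, "ben": 0.8, "caro": 0.2, "dina": 0.6}

# Standard fuzzy intersection/union take min/max over membership grades.
fuzzy_both = {u: min(prevention_grade[u], fundraising_grade[u])
              for u in prevention_grade}
fuzzy_either = {u: max(prevention_grade[u], fundraising_grade[u])
                for u in prevention_grade}

print(sorted(both))        # users associated with both topics (crisp)
print(fuzzy_both["ben"])   # graded association with both topics
```

The crisp case answers "who engaged with both topics?"; the fuzzy case preserves how strongly each user is associated, which is closer to the graded nature of social media behaviour.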
Diane Rasmussen Pennington
She will argue that quantitative analysis of big data carries risks, but that it can provide value if used appropriately. The misuse of big data presents risks to individuals’ privacy and online security, especially since the process of anonymizing consumer data does not always succeed (Williams & Rasmussen Neal, 2012; Mayer-Schönberger & Cukier, 2013). The patterns that emerge from quantitative big data analysis do not tell researchers much, if anything, about individuals’ preferences and differences. Additionally, the data in “big data” are frequently incorrect, as can be observed anecdotally on credit reports or “people search” websites such as intelius.com. Qualitative analysis of big data, by contrast, does not allow for larger sample sizes or for finding the potentially useful trends mentioned previously, although it can provide deeper insight into the behaviours or preferences of fewer individuals (Rasmussen Pennington, in press). Both approaches have something important to offer; how can they be used together in a mixed-methods approach to achieve the most meaningful, accurate, and representative results possible?
Where does the tension come from between large-scale quantitative and small-scale qualitative approaches?
Names of panellists and perspectives:
Guillaume Latzko-Toth
The algorithmic processing of very large sets of “traces” of user activities collected by digital platforms—so-called “Big Data”—exerts a strong appeal on social media researchers. In the context of a computational turn in the social sciences and humanities, is qualitative research based on small samples and corpora (“Small Data”) still relevant? It is argued that the unique value of such research lies in data thickness, achieved through a process we call thickening. Drawing on recent case studies in social media research I have conducted, I propose and illustrate three strategies to thicken trace data: trace interviews, manual data collection, and flexible long-term online observation.
Frauke Zeller
Provides an overview of the challenges and opportunities, as well as a discussion of the term “data” and of the nature of social media data. The methods overview, combined with an introduction to and discussion of methods used in other disciplines and in commercial market research, aims to provide a practical and applied guideline for social media research. An applied case study at the end of the chapter describes a novel approach to the analysis of multimodal, large data sets in online communication environments using a mixed-method design.
Big Data and Political Science. Public media are increasingly dominated by discussions of the utility of social media-sourced data for policymaking, surveillance, and marketing. Academic lag on the subject, however, remains endemic, making its utility in social science research seem elusive to those first broaching it. The panel discussion will explore the utility of social media data as indicative evidence of external social, economic, and political relations, trends, and events.
Social media encompass a wide array of platforms, ranging from popular sites such as Facebook and Sina Weibo to sites geared to niche communities such as Academia, Pinterest, and Ello. While social media share common features that afford engagement through ‘two-way’ audience interaction, the diversity in design encountered across sites makes it difficult to identify a set of core functionalities. In this discussion, I focus on user engagement in the context of social media at the level of the individual and network experience – i.e., the experiences that motivate users to engage with content created, shared, or endorsed by people in their social networks and encourage them to linger and return. Understanding social media engagement is valuable because it bridges big data analytics and small-scale research.
Panelists will discuss their positions and ideas on the relevance and importance of both qualitative and quantitative methods in social media research. Some approaches, such as content analysis, can be performed either qualitatively or quantitatively, while others require an exclusion of one approach or the other (Rasmussen Pennington, in review). For example, discourse analysis is entirely qualitative, which presents potential challenges such as a relatively small dataset (Neal, 2010). Automated approaches, such as sentiment analysis, do not necessarily allow for human intervention with the data, and therefore inflections and other subtleties in the sample may not be captured adequately (Thelwall & Buckley, 2013).
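The limitation of fully automated approaches noted above can be seen in a toy lexicon-based sentiment scorer (an illustrative sketch with a made-up lexicon, not any panelist’s actual method; real tools such as those discussed by Thelwall & Buckley are far more sophisticated): without human intervention, negation and sarcasm are easily misread.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
# The lexicon and example texts are hypothetical.
LEXICON = {"great": 1, "love": 1, "good": 1,
           "bad": -1, "awful": -1, "hate": -1}

def sentiment(text):
    """Sum the polarity of known words; ignore all other words."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
               for word in text.split())

# Straightforward cases are scored plausibly...
print(sentiment("I love this, it is great!"))             # clearly positive

# ...but a purely automated word count misses negation and sarcasm,
# scoring both of these as positive because "good"/"great" appear:
print(sentiment("This is not good at all"))
print(sentiment("Oh great, my flight is delayed again"))
```

The last two texts illustrate why human inspection of samples remains valuable even in large-scale automated analyses: the score alone cannot distinguish sincere praise from negated or sarcastic uses of the same words.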
The panel addresses several conference themes. The various subtopics under the Theories & Methods category, including Qualitative Approaches, Quantitative Approaches, and Theoretical Models, are represented by the panellists as evidenced by their biographies. Additionally, Big and Small Data (under the Social Media & Big Data category) will be discussed, since data sampling is an issue of interest to researchers who work in the various approaches to social media data collection and analysis.
Overview of session:
Who is involved?
Introduction to the panel
Editors: Anabel Quan-Haase, Luke Sloan
Question 1 explored
Panelists discuss question 1: Martin Hand, Claudine Bonneau, Mélanie Millette, Ravi Vatrapu, Diane Rasmussen Pennington
Poll results or small-group discussion by all panelists. Moderators: Anabel Quan-Haase and Diane Rasmussen Pennington
End of workshop and dinner
Attendees will be able to engage in the session in three forms: (1) asking panelists questions and discussing with them three central questions; (2) providing input via a live poll during the session; and (3) through a wrap-up interactive session lasting 20 minutes at the end of the panel. In the final wrap-up session, attendees will be able to reflect upon the results from the live poll as well as provide their opinions of the results.
Brief Biography of Each Presenter:
Claudine Bonneau, Université du Québec à Montréal
Claudine Bonneau is Associate Professor of Management & Technology at Université du Québec à Montréal (UQAM), where she is a member of the Laboratory on Computer-Mediated Communication (LabCMO) and teaches in graduate and undergraduate programs in Information Technology. Her current work focuses on social media uses and online collaboration practices at work. She is also interested in methodological issues related to qualitative research and online ethnography. Besides her contributions to edited books (such as the Handbook of Social Media Research Methods, Sage, 2016), her work has been published in the International Journal of Project Management (2014), tic&société (2013) and other French-language publications.
Anatoliy Gruzd, Ryerson University
Dr. Gruzd is a Canada Research Chair in Social Media Data Stewardship, Associate Professor at the Ted Rogers School of Management at Ryerson University (Canada), and Director of the Social Media Lab. He is also a co-editor of a multidisciplinary journal on Big Data and Society published by Sage. His research initiatives explore how the advent of social media and the growing availability of user-generated big data are changing the ways in which people communicate, collaborate and disseminate information and how these changes impact the social, economic and political norms and structures of modern society.
Martin Hand, Queen’s University
Martin Hand is an Associate Professor in Sociology at Queen’s University, Kingston, Canada. He is the co-editor of Big Data? Qualitative Approaches to Digital Research (2014; Emerald), author of Ubiquitous Photography (2012; Polity), Making Digital Cultures (2008; Ashgate) and co-author of The Design of Everyday Life (2007; Berg), plus articles and essays about visual culture, technology, and consumption. He is currently conducting research on technology, time and temporality in contemporary Canadian society, funded by the Social Sciences and Humanities Research Council of Canada.
Guillaume Latzko-Toth, Université Laval
Guillaume Latzko-Toth is Associate Professor in the Department of Information and Communication at Université Laval (Quebec City, Canada) and codirector of the Laboratory on Computer-Mediated Communication (LabCMO, http://www.labcmo.ca). Rooted in a Science and Technology Studies (STS) perspective, his research and publications address the role of users in the development of digital media, the transformations of publics and publicness, and methodological and ethical issues related to Internet research. Besides several contributions to edited books, his work appeared in the Journal of Community Informatics (2006), the Bulletin of Science, Technology and Society (2010), tic&société (2013) and the Canadian Journal of Communication (2014).
Mélanie Millette, Université du Québec à Montréal
Mélanie Millette is Professeure substitut at the Département de communication sociale et publique, UQAM, Canada. She is a member of the Laboratory on Computer-Mediated Communication (LabCMO, UQAM and Université Laval). Her work concerns social, political, and cultural aspects of social media uses, more specifically how citizens mobilize online platforms to achieve political participation. She won a SSHRC-Armand-Bombardier grant and a Trudeau Foundation scholarship for her thesis research which examines media visibility options offered by online channels such as Twitter for francophone minorities in Canada. She is the co-editor of a book on social media (Médias sociaux : enjeux pour la communication, PUQ, 2012) and is a contributor to many edited books (such as the Handbook of Social Media Research Methods, Sage, 2016, and Hashtag Publics: The Power and Politics of Discursive Networks, Peter Lang, 2015).
Anabel Quan-Haase, The University of Western Ontario
Anabel Quan-Haase is Associate Professor of Information and Media Studies and Sociology at Western University. Her research interests include digital scholarship, networked work, serendipity in work practices, serendipity in social media, and the design of discovery systems that promote serendipity. She is the author of “Technology and Society: Social Networks, Inequality and Power” (Oxford University Press, 2015) and co-editor of the Handbook of Social Media Research Methods (Sage, 2016). She is the past president of the Canadian Association of Information Science and current Council Member of the Communication, Information Technology, and Media Sociology section of the American Sociological Association. She has organized several conferences including the Canadian Association of Information Science Annual Meeting and has served on numerous programme committees.
Diane Rasmussen Pennington, University of Strathclyde
Dr. Diane Rasmussen Pennington is a Lecturer in Information Science in the Department of Computer and Information Sciences at the University of Strathclyde in Glasgow, Scotland, where she is a member of the iLab and the Digital Health and Wellness research groups. She is also the Social Media Manager of the Association for Information Science & Technology (ASIS&T). Dr Rasmussen Pennington has taught classes on research methods, social media, knowledge organisation, and a range of information technology topics. Her diverse research areas encompass non-text information indexing and retrieval, Emotional Information Retrieval (EmIR), user behaviours on social media, and online health information preferences. She is the editor of Indexing and Retrieval of Non-Text Information (2012) and Social Media for Academics: A Practical Guide (2012). She is currently editing a book series entitled Computing for Information Professionals.
Luke Sloan, Cardiff University
Luke Sloan is a Senior Lecturer in Quantitative Methods and Deputy Director of the Social Data Science Lab at the School of Social Sciences, Cardiff University UK. Luke has worked on a range of projects investigating the use of Twitter data for understanding social phenomena covering topics such as election prediction, tracking (mis)information propagation during food scares and ‘crime-sensing’. His research focuses on the development of demographic proxies for Twitter data to further understand who uses the platform and increase the utility of such data for the social sciences. He sits as an expert member on the Social Media Analytics Review and Information Group (SMARIG) which brings together academics and government agencies and he works closely with the Office for National Statistics and Food Standards Agency.
Ravi Vatrapu, Copenhagen Business School
Ravi Vatrapu is a professor of human computer interaction at the Department of IT Management, Copenhagen Business School; professor of applied computing at the Westerdals Oslo School of Arts Communication and Technology; and director of the Computational Social Science Laboratory (http://cssl.cbs.dk). Prof. Vatrapu’s current research focus is on big social data analytics. Based on the enactive approach to the philosophy of mind and phenomenological approach to sociology and the mathematics of classical, fuzzy and rough set theories, his current research program seeks to design, develop and evaluate a new holistic approach to computational social science, Social Set Analytics (SSA). SSA consists of novel formal models, predictive methods and visual analytics tools for big social data. Prof. Vatrapu holds a Doctor of Philosophy (PhD) degree in Communication and Information Sciences from the University of Hawaii at Manoa, a Master of Science (M.Sc) in Computer Science and Applications from Virginia Tech, and a Bachelor of Technology in Computer Science and Systems Engineering from Andhra University.
Frauke Zeller, Ryerson University
Frauke Zeller is Assistant Professor in the School of Professional Communication at Ryerson University in Toronto (ON), Canada. Her research interests include organizational communication, Human-Computer Interaction/Human-Robot Interaction, digital communication, and method development for digital research analyses. She has been awarded with a range of major research grants, among them a Marie Curie Fellowship (2011-2013), which is one of Europe’s most distinguished individual research grants. It enabled her to conduct research on big data and multimodal communication analyses tools. She is the co-creator of hitchBOT, Canada’s first hitchhiking robot, and has also been involved in a range of art works and social-scientific experiments relating to robotics and AI. She is co-editor of “Revitalising audience research: Innovations in European audience research” (Routledge 2015).
boyd, D. and Crawford, K. (2012). Critical questions for big data: provocations for a cultural, technological and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
Duggan, M., & Smith, A. (2014). Social media update 2013: 42% of online adults use multiple social networking sites, but Facebook remains the platform of choice (Online). Pew Research Internet Project. Retrieved January 14, 2015, from http://www.pewinternet.org/
Grabau, M., & Hegelich, S. (2016). The gas game: Simulating decision-making in the European Union’s external natural gas policy. Accepted for publication in Swiss Political Science Review (SPSR).
Hegelich, S., Fraune, C., & Knollmann, D. (2015). Point predictions and the punctuated equilibrium theory: A data mining approach. Policy Studies Journal (PSJ), 43(2), 228-256.
Hegelich, S., & Shahrezaye, M. (2015). The communication behavior of German MPs on Twitter: Preaching to the converted and attacking opponents. European Policy Analysis (EPA), 1(2), 155-174.
Hogan, B., & Quan-Haase, A. (2010). Persistence and change in social media. Bulletin of Science, Technology & Society, 30(5), 309–315.
Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. New York: Houghton Mifflin Harcourt.
Neal, D. M. (2010). Emotion-based tags in photographic documents: The interplay of text, image, and social influence. Canadian Journal of Information and Library Science, 34(3), 329-353.
Thelwall, M., & Buckley, K. (2013). Topic-based sentiment analysis for the Social Web: The role of mood and issue-related words. Journal of the American Society for Information Science and Technology, 64(8), 1608–1617.
Williams, L., & Rasmussen Neal, D. (2012). The digital aggregated self: A literature review. Paper presented at the IEEE International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Sanya, China.
“It’s hard to put a pin on a moving target.” We’ve all likely heard this phrase before, but, aside from its military connotation, the sentiment itself was mostly lost on me. That is until I began framing a research project that aims to do exactly that—pin a moving target.
Over the past year and a half, Dr. Anabel Quan-Haase, MA student Alyssa MacDougall, and I, PhD student Chandell Gosse, developed a project that looks at social and political online campaigns. These campaigns have become a notable way to rally support and raise awareness for important social and political causes. While the name itself reveals a few key details, you might still be left wondering: what exactly are social and political online campaigns? Well, that’s the problem. That’s the moving target.
Let’s go back for a minute—when Anabel, Alyssa, and I envisioned the project and articulated the questions of most interest to us, things seemed relatively uncomplicated. At the time, the ALS ice bucket challenge had recently dominated social media, Kony 2012 was still a sore spot for many people, and accusations of “slacktivism” and “clicktivism” were scrawled across news headlines. Our primary research question seemed simple enough: do people who participate in online campaigns like these also participate offline, away from social media?
Of course, the first step in creating this project was to determine what we mean when we say “social and political online campaigns.” This isn’t rocket science: most people intuitively know what kinds of campaigns I am referring to. However, “you’ll know it when you see it” doesn’t constitute a definition and definitely doesn’t leave much room for measurement. This task forced us to attempt to discern the common features of social and political viral campaigns. Which features distinguished them from their offline counterparts? This was more interesting and more difficult than any of us had anticipated.
Spirited debates between Alyssa and me ensued. For example, my gut reaction was to exclude a campaign like Bell Let’s Talk* because I thought that “participating” in this particular campaign does not require any action outside a person’s usual, everyday activity (e.g., texting). However, by making this statement, as Alyssa kindly rebutted, I was suggesting that sharing, tweeting, liking, or posting—all major components of social and political online campaigns—were not a part of people’s everyday activity. Of course, this is not true! The role of social media in the lives of users is ubiquitous. In fact, these campaigns capitalize on exactly this concept: their (arguable) effectiveness stems from their ability to insert themselves into the everyday activity of the user. This is to say that sharing, tweeting, liking, and posting are par for the course, and so determining which actions are taken outside of one’s usual, everyday activity is more complex for online campaigns than for offline campaigns. For example, we cannot say that posting or retweeting a particular message is outside of someone’s everyday activity in the context of social media users; however, it is much easier to suggest that attending a protest, donating money, or boycotting a product is generally outside one’s everyday activity.
Eventually we landed on four broad guidelines:
The first guideline was very simple: social and political online campaigns rely on social media to insert their message into popular media and culture. This means that the primary form of message dissemination is through social networking sites such as Facebook and Twitter.
Second, and perhaps most crucially, they receive a lot of attention, usually in the form of shares, likes, posts, tweets, and retweets (for example, the ALS Ice Bucket Challenge produced over 2.5 million videos on Facebook alone).
Third, they tend to have a short shelf life, even though they may be recurring or ongoing. This isn’t to say that campaigns don’t maintain followers, but rather that the general trend so far has been that once campaigns go viral they wane from the social media spotlight and the attention focused on them decreases significantly. This feature was especially important because of our interest in the term slacktivism.
And lastly, they focus on a single, very specific issue. This distinguishes them from older and more traditional social movements, in which people rally around common ideologies or political parties. Many social and political online campaigns break boundaries by resonating with people from both ends of the political spectrum.
After we outlined the “definition” of a social/political viral campaign, we needed to decide how to work these campaigns into our survey. Part of this task involved deciding which campaign(s) the survey would focus on, which once again led to a spirited debate. We understood that the problem with focusing on a single campaign is the ephemerality of viral events. In other words, by the time the survey was created and data collection began, the campaign would very likely be dead, over with, buried in the backyard next to Fido and Fluffy. To work around this problem, we settled on a list of very popular campaigns but also allowed participants to input the name of a campaign that was not on the list but in which they had participated—this enabled participants to customize their individual survey experience and reflect on a campaign that was recent or emerging.
This decision also bound us theoretically: by allowing people to input their own campaign, it was possible (though highly unlikely) that no two participants would comment on the same campaign. With this possibility in mind, the focus of our research became decidedly user-oriented rather than about the campaigns themselves. This was fitting, given that our overarching question concerns whether social media users who participate in online campaigns also participate in those campaigns outside of social media. By allowing participants to customize their survey, we ran the risk of having lots of data about users but little data about whether particular campaigns garnered more or less support outside of social media. This was all right, however: although that is an interesting area to explore, it is outside the parameters of this particular project.
If you want to learn more about this project, you can read our letter of intent here. It’s important to remember that our four guidelines are not to be mistaken for defining features. To reiterate, the aim was to structure definitions in a way that makes the core components of social and political online campaigns recognizable and gives shape to our project. In other words, the four guidelines allow social and political online campaigns, which are relatively amorphous events, to develop and depart from one another while remaining relevant to our project.
Please consider submitting your work to the 2016 CITAMS Student Paper Award. I would like to encourage faculty to nominate their students for the award. We also welcome self-nominations from students. See details below. We look forward to reading all nominations. Anabel, Casey, and David
2016 CITAMS Student Paper Award
The award recognizes 1) a published or unpublished article, paper, or book chapter contributing to the sociology of communication, media, and/or information technology, OR 2) the design or use of a communication, media, or information technology that provides an exceptional contribution to the sociology of communication, media, and/or information technology. Regarding authorship: books, chapters, articles, papers, and computing applications may have multiple authors, but in the case of student-faculty collaborations, the student must be the lead or senior author. The award is open to students in disciplines other than sociology; authors need not have a degree in sociology or be in a sociology department, but nominees must be current CITAMS members to be considered for this award. Graduate students can request a free membership to CITAMS as long as they are current members of the ASA. Submissions must be in English and written within the two calendar years prior to the nomination deadline. There are no limitations on length. Award winners will be invited to serve on future award committees.
All materials for this award must be received by March 1, 2016.
Email a nomination letter and the paper in PDF or Word format to all three committee members:
Dr. Casey Brienza Email: email@example.com
March 17, 2016, 9:00am-5:00pm, Chapel Hill, North Carolina, United States
For two decades, research has sought to understand serendipity and how it may be facilitated in digital environments such as information visualization systems, search systems, and social media. The motivation for investigating serendipity comes from its association with positive outcomes that range from personal benefits to global rewards. To date, research has made significant headway in defining and mapping the process of serendipity and new tools are emerging to support it. But we lack robust methods of evaluating new or enhanced features, functions, and tools.
The goal of the Workshop is to examine how we balance the tension between diversity and novelty in designing digital environments and subsequently how we evaluate the ‘serendipitousness’ of those environments. We invite participants from a range of disciplines (e.g., information science, HCI, digital humanities, cognitive science) and research perspectives to help us solve this wicked problem.
“Is There Anything Serendipity Research Can Learn from Creativity Research?”
John Gero, University of North Carolina at Charlotte and Krasnow Institute for Advanced Study, George Mason University
John Gero is the author or editor of over 50 books and more than 650 papers and book chapters in the fields of design science, design cognition, design computing, artificial intelligence, computer-aided design and cognitive science. He has been a Visiting Professor of Architecture, Civil Engineering, Cognitive Science, Computer Science, Design and Computation or Mechanical Engineering at MIT, UC-Berkeley, UCLA, Columbia and CMU in the USA, at Strathclyde and Loughborough in the UK, at INSA-Lyon and Provence in France and at EPFL in Switzerland. http://mason.gmu.edu/~jgero/
How to participate
Submit a 2-page paper using the ACM SIG Proceedings Template about your ongoing work, recent results, or study methods related to serendipity, whether published or in progress. Possible themes for these papers may include, but are not limited to:
Evaluating whether or how digital environments enable serendipity
Use of qualitative methods such as interviews and think-aloud to evaluate user perceptions
Modifications to quantitative evaluation methods such as controlled experiments and log file analyses to test designs
Identification of factors other than the environment (e.g., context, individual differences, strategies, emotions, attitudes) that influence serendipity that should be taken into consideration during evaluation
Designing elements and functions in digital environments so that serendipity is facilitated
Application of theory and models in the design (or evaluation) of affordances related to serendipity
Design of serendipitous digital environments (e.g., information visualization systems, recommender systems, digital libraries, search engines)
Authors of selected papers will be asked to
A) give “lightning talks” on their work through a 5-minute presentation; or
B) participate in a “show and tell event” to demonstrate their project or prototype.
In addition, just prior to and during the workshop we will be conducting a whirlwind Delphi study to identify essential and novel measures for assessing “serendipitousness.” The results of the group effort will be discussed at the Workshop to highlight pertinent measures.
At least one author of each accepted paper must attend the workshop, and all participants must register for the workshop.
**Submissions and inquiries can be sent to Lori McCay-Peet [firstname.lastname@example.org]**