Technology has always had a powerful effect on human lives. From the first tools that let early humans eat new kinds of food, to the industrialisation that transformed our economy and the way we live, to the all-digital era that keeps us constantly connected to people we know or don’t know, it has deeply changed humanity and the way we interact with each other.
As designers and engineers, our role is to define and shape how people use this technology. But it is even more interesting to notice how technology shapes the lifestyle of its users. Take travel: the creation and evolution of trains, cars and planes dramatically changed the way we live. We are no longer stuck in our hometowns, needing days or weeks of walking or riding to reach bigger towns or other countries. As travel becomes cheaper and faster, more and more people can explore the space around them and reach people they have not seen for a long time. Industrialisation changed the whole social and economic system: less time is needed for work and production, which created time for leisure that was previously reserved for the upper class. The masses started to gain access to culture, knowledge and time for social activities, mainly thanks to the new tools that made production faster and more efficient.
We had the opportunity to interview staff from the McManus to get feedback on our experience prototypes, and so get a more accurate idea of how our concepts would be perceived by the museum’s visitors. It was interesting to see how people who had no prior idea of the brief reacted to our concepts — everyone we had experimented with so far already knew the context (set in 10 years, focused on groups and couples, with every surface a potential screen).
These people were not particularly tech-savvy either, which caused some confusion around our concept and made us rethink how we present it. For instance, while we imagined pointing at the labels with your hands, Leap Motion style, they assumed it was a touch interface, or that you would have to use a smartphone to control how the labels change. Some of their comments seemed irrelevant (for instance “it would stop working if there was a power cut” — but with the current system, you could hardly read the labels without light either), yet their general reaction helped us pinpoint the issues in our prototype: we had only focused on the proximity/number-of-people aspect, and they struggled to visualise how the different label versions or extra interactive content would work, since those weren’t present in the prototypes.
As we obviously could not bring the whiteboard to the McManus, we made a very portable cardboard version of our initial prototype, reusing the original sheets of paper and drawing exhibits in chalk instead of attaching real objects. Although it looked even more precarious than the whiteboard version, it succeeded in conveying the idea. The electronic version, however, was much less successful and actually confused users — they thought the behaviour we used to demo it (face recognition, which requires standing exactly in front of the label) was the one we wanted in the final product (which only needs presence and relies on proximity, no matter where the head or gaze goes).
Our first experience prototype was created in a few minutes using the elements available in the DIxD studio: a whiteboard representing the back of a display case, various objects (wood, tools) attached to it to represent exhibits, and large sheets of paper folded to show different versions of the label text. This let us iterate on the prototype extremely quickly and make a lot of progress in less than a morning.
This quickly proved the importance of prototyping — although it looked really cheap and unsophisticated, it simply worked to demonstrate, to scale, how our project behaved. We could try it out on people to see how they would react to such a system, and change it instantly just by drawing on the board or moving the objects, which helped us refine the interactions themselves (as opposed to the wireframes and storyboard, where the feedback was more about the concept than the way it was implemented).
Alongside this first prototype, we decided to push things further and create an electronic experience prototype, using Processing and a webcam to demo the interaction in a realistic way. For this we used a library called OpenCV, which helped us detect faces in the webcam images, and we mapped those to squares representing labels: when a face is exactly on a label, the label switches to the longer version in smaller text, and reverts to its original form when the face moves away. The advantage is that we can have as many people as we like in front of the screen and adapt the labels accordingly; with the “manual” prototype, we would have needed several people changing the labels very quickly whenever someone came or left. It also let us experiment with the kind of animation that could be used to change the text, to make it as clear as possible.
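The core logic was simple: for each face rectangle OpenCV detects, check whether it overlaps a label’s zone and, if so, switch that label to its expanded version. A minimal Python sketch of that logic (the actual prototype ran in Processing with the OpenCV library; the rectangles, function names and ‘short’/‘long’ states here are illustrative):

```python
# Sketch of the prototype's label-switching logic (illustrative;
# the real version used live face detection in Processing).

def overlaps(face, label):
    """Axis-aligned rectangle overlap test. Rects are (x, y, w, h)."""
    fx, fy, fw, fh = face
    lx, ly, lw, lh = label
    return fx < lx + lw and lx < fx + fw and fy < ly + lh and ly < fy + fh

def label_states(faces, labels):
    """For each label zone, return 'long' when a detected face
    overlaps it, otherwise the default 'short' summary."""
    return ['long' if any(overlaps(f, l) for f in faces) else 'short'
            for l in labels]

# Two label zones side by side; one detected face over the first.
labels = [(0, 0, 100, 100), (120, 0, 100, 100)]
faces = [(30, 30, 40, 40)]
print(label_states(faces, labels))  # ['long', 'short']
```

In the real sketch, the face list would be refreshed every frame from the webcam, which is what produced the “exactly in front of the label” behaviour that confused testers.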
Although both prototypes proved very useful in refining our concept and defining the interactions, I felt the digital prototype wasn’t a very efficient one — learning how to use OpenCV was interesting and fun, but it took too much time to build for the benefits it brought.
After completing our insight cards, we chose one to work on for our final project. Alan and I decided to focus on a mixture of two cards: the poor labelling system and the different behaviour of groups compared to single visitors.
Our concept aims to replace the current small plastic labels with screens on the back of display cases and on walls next to paintings. This makes the labels dynamic: they can be interacted with, either directly or through less conscious sensors. Such a system would allow us to:
- Provide labels closer to the items they are referring to;
- Reduce the social awkwardness in museums, by showing large summaries to groups of people that they can all see, and smaller, longer versions to individuals;
- Offer different versions of labels, with simplified variations for kids, ‘technical’ versions with extra details for people really interested in it, and additional stories;
- Add context to exhibits by adding videos or extra pictures;
- Let users change the text size depending on their visual preference, helping disabled or poorly-sighted people;
- Display relationships and links between designated items.
- When more than two people are detected, the labels get bigger and display just a short summary. Everyone can read them and no one needs to wait for other people to move on to another display.
- When only one or two people come closer to the display case, the labels change according to what the visitor is reading. The text becomes smaller to accommodate a longer description and give more context.
- When a visitor points at an item, the extended text appears, no matter how many people there are. Extra information is displayed: additional content such as videos, alternative versions, and the ability to change the text size.
- The project brief sets the year as 2023 and assumes that, after extensive refurbishment, any surface in the museum can be used as a screen [↩]
For our first assignment in this module we were asked to go to the McManus galleries to observe groups inside the museum and gather as much insight as possible from their behaviour: the experience they have watching paintings and items in display cases, how they interact with their group or partner, how differently they behave depending on their age, gender or the type of group they are in (friends, couple, family with kids, tour group…), how well they navigate the museum (instinctively or by relying on the wayfinding system), how their normal flow changes when other groups are around, and how they use the existing technology (interactive kiosks/screens and phones), for example.
I teamed up with Ioana and we went there twice, for a couple of hours each time. The experience proved interesting — sometimes boring when there weren’t many people, but most of the time surprising, yielding results we didn’t expect. People weren’t really engaged with the interactive kiosks or even the labels; they tended to read them very quickly and move on. At one point, however, someone in a group started telling the rest of his group a story about Dundee’s history. He described how the population lived at the time and how the streets have changed, showing the differences on the scale model. As he spoke, people from other groups came along to listen as well, even asking questions. This demonstrated how storytelling and informal presentation are much more engaging than long, boring text labels.
Another point, related to the first, is how labels were displayed in the museum. These information plates were rarely read fully; most of the time people seemed to glance at the title but not look further. We asked one visitor how she felt about the labels: she found them too long, and particularly hard to read in some exhibits, as they were placed too low, there wasn’t enough light and the text size was too small. This also proved to be an accessibility issue when we saw another group that included wheelchair users; their friends had to read the labels to them, as they could not read the labels themselves.
Another insight I noted was a kid who read a few labels and constantly asked his parents to explain some of the terms. I found it discouraging that labels sometimes used complicated or technical language.
Lastly, we noticed that single visitors took much more time to read labels than people in groups or couples. Groups felt the social need to keep up with each other and could not stay long at each exhibit; this was a problem, as people could miss interesting information from the labels that is sometimes even crucial to understanding an exhibit.
As mentioned in an earlier post, Knittern was not an easy thing to code. First we had to think about our database, as each field would have a number of relationships, and missing a crucial one from the beginning would have required drastic changes to our code later on. We started by establishing a list of requirements to define all the minimum functionality our prototype would need, then designed the database structure in a way open enough to let us add optional features later on without having to refactor the whole codebase.
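A minimal sketch of the kind of structure this implies, using SQLite for illustration (the table and column names are assumptions, not our actual schema): each pattern keeps an optional reference to the pattern it was remixed from, so the relationships between remixed patterns survive, and comments and likes hang off patterns without touching that core.

```python
import sqlite3

# Illustrative schema sketch: patterns self-reference via parent_id,
# mirroring open-source "forking"; comments and likes are optional
# features bolted on without changing the core tables.
db = sqlite3.connect(':memory:')
db.executescript("""
CREATE TABLE patterns (
    id        INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL,
    title     TEXT NOT NULL,
    parent_id INTEGER REFERENCES patterns(id)  -- NULL for originals
);
CREATE TABLE comments (
    id         INTEGER PRIMARY KEY,
    pattern_id INTEGER NOT NULL REFERENCES patterns(id),
    body       TEXT NOT NULL
);
CREATE TABLE likes (
    user_id    INTEGER NOT NULL,
    pattern_id INTEGER NOT NULL REFERENCES patterns(id),
    PRIMARY KEY (user_id, pattern_id)
);
""")
db.execute("INSERT INTO patterns VALUES (1, 10, 'Basic scarf', NULL)")
db.execute("INSERT INTO patterns VALUES (2, 11, 'Striped scarf', 1)")

# Recover the remix relationship: which pattern was each remix based on?
row = db.execute("""
    SELECT p.title AS remix, o.title AS original
    FROM patterns p JOIN patterns o ON p.parent_id = o.id
""").fetchone()
print(row)  # ('Striped scarf', 'Basic scarf')
```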
The central point of the social network, the application, required the most work to become functional. It was tricky to find a technology that would let us manipulate an image live. The most obvious option was Flash, but none of us knew how to use it, it is becoming obsolete, and generating thumbnails with it would have been tricky. The HTML5 <canvas> element, which lets us draw directly in the browser with vector instructions, is more modern and accessible, but would have required very complex code exceeding the project’s time frame — especially as, again, none of us had used it before.
We ended up choosing CSS3 patterns. These are not extremely flexible (we cannot make complex modifications), but they fit the purpose of mocking up our social network to demonstrate the concept: we can change the colours and sizes live. The downside is that they are not compatible with older browsers, and even now, some pattern types (those using radial gradients) work only in Firefox and not in Safari.
After researching several paths and different concepts for our social network, we decided to go with the second option: a web-based pattern generator that lets people edit other people’s patterns, comment on them and post pictures of finished knitwear made using a pattern from the website. This concept, called Knittern, is based on the ever-growing trend of ‘open source’ outside of the software world: we not only freely share the instructions for an original pattern and allow their redistribution, we also encourage and provide the tools to modify them, letting every knitter create patterns that suit their needs or that they think they can improve.
This idea of “remixing”, akin to what is called “forking” in the open-source world — duplicating an existing project to create a variation that suits particular needs or fixes bugs — is the foundation of our social network. Creating and composing an original pattern is an extremely complicated task for knitting beginners, but by making it easy to modify existing ones, we open the door to an infinity of new patterns varying in size, colour and more, that anyone can make. To add more depth, our social network includes comments to help knitters discuss their patterns, a ‘like’ function to promote the best patterns created by the community and help skilled creators/remixers stand out, and the ability to post pictures of finished knitwear made using a specific pattern.
Implementing Knittern was a complicated task. On the content side, we needed a lot of information to figure out the kind of discussions knitters have when helping each other with difficult patterns, the type of comments they make and the way they behave on other social networks, plus a great number of images and pre-existing patterns to fill the website with realistic content for the prototype. On the technical side, we had a complicated database (patterns link to other patterns, we need to keep the relationships between remixed patterns, and we store different types of data) and, most of all, a very ambitious application: the pattern generator. Finally, on the design side, we wanted something simple that puts content forward, is accessible to everyone and does not look too clichéd — most knitting websites have either an ultra-feminine look or a very ‘web 1.0’ look-and-feel, as if they had not been updated since 1996.
We managed to gather most of the content from our initial research, and it was not too difficult to find patterns and discussions around them. For the sake of prototyping and ‘faking’ content to demonstrate how users would use our network, we wrote several dummy user profiles, based on real-life profile types, to use later in the database, and gathered various comments to link them all together.
As part of our second-year Designing Social Networks module, we had to research a design, arts and crafts subject to find a purpose for our network. Knitting was an interesting subject to research: as a widespread handicraft, it has an extremely large community, not restricted to a small group of designers but led by millions of hobbyists around the world. With mass production taken over by machines, hand knitting today is built around social connection, sharing with friends and family, and a “do-it-yourself” culture. These very local connections are repeated on the Internet, with many small, scattered communities and resources that are hardly centralised or global.
This is what makes it interesting for us: the Internet can reach a wide audience and remove language and country boundaries, yet the knitting community, although early adopters of the web, prefers to stay in small groups, due to the social aspect of the craft. This limits many users: there is a language barrier, a cultural one (techniques and clothing styles differ between regions of the world), and the difficulty of finding smaller websites and communities you have never heard of, despite their very rich and varied content.
Creating a social network for such a wide web of small resources is no easy task. It seems to be a built-in mindset of the knitting community to work in small local groups rather than participate in bigger structures where everything is grouped together, so we had several choices. We could build something like Twitter, acting as a central point that links to existing resources, with little in-depth discussion but a strong social side — the most worthy content being shared more widely by users with similar interests. This, however, is not a perfect solution: it is not very engaging for knitters, because the interest communities would likely remain the same as they currently are on the Internet, and the boundaries we are looking to remove would still be there.
Another option is not based on bringing resources together, but rather on sharing and making the social side of knitting more public: focusing on the experience of knitting, the story behind the knitted object and its making, instead of the practical side — how to make it, or how to sell it. We chose to work more on this idea, and it became our first concept.
Another of our concepts revolves around knitting patterns: usually coming from books or websites, they are limited in number, and it is hard to see how they will turn out if you use yarns in different colours or a slightly bigger gauge. Combining this with the language barrier problem, we came to the idea of a computer-synthesised definition of a pattern, created with a pattern generator that then outputs the instructions in a human-readable language. Such a generator could output a pattern’s instructions in any human language, effectively removing the language barrier usually encountered on knitting websites, and would let other users “edit” or remix existing patterns to create their own variations.
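The generator idea boils down to separating the pattern’s data from its wording: store stitch runs per row in a neutral format, then render them in whichever language the reader wants. A toy Python sketch of that separation (the data format, stitch names and translations are illustrative assumptions, not a real Knittern feature):

```python
# Illustrative sketch: a machine-readable pattern (stitch runs per
# row) rendered into human-readable instructions, with per-language
# stitch names to sidestep the language barrier.

STITCH_NAMES = {
    'en': {'k': 'knit', 'p': 'purl'},
    'fr': {'k': 'maille endroit', 'p': 'maille envers'},  # assumed wording
}

def render(pattern, lang='en'):
    names = STITCH_NAMES[lang]
    lines = []
    for i, row in enumerate(pattern, start=1):
        steps = ', '.join(f'{names[s]} {n}' for s, n in row)
        lines.append(f'Row {i}: {steps}')
    return '\n'.join(lines)

rib = [[('k', 2), ('p', 2)], [('p', 2), ('k', 2)]]  # a 2x2 rib, two rows
print(render(rib))
# Row 1: knit 2, purl 2
# Row 2: purl 2, knit 2
```

Remixing then becomes a data operation (swap colours, repeat rows, change counts) rather than an editing job on prose instructions.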
A third concept is a social network aimed not at hobbyists but at professionals who want to build a professional network and find work and clients. This would need a more high-end, refined interface with a specific business model, aimed at the fashion industry. The ability to showcase a portfolio and other references is often lacking in existing professional networks, and because textile designers are often specialised in a particular type of craft — mostly sewing — it is hard to find knitters there.
We wanted to explore as many different aspects as possible, to see how varied the possibilities of such a craft can be. It is a hard decision-making process, however, and we still have a lot of work to do to find other ideas before making our final prototype.
Extensively test equipment/software before interview:
One of the lessons learnt from the project was that we should extensively test our equipment before the interview takes place. Shortly before the interview we ran into difficulty recording the audio, as the programs we thought would work did not. We therefore had to rely solely on a recording from Mike Vanis’s iPhone, with no backup. This meant we were panicking a bit before the interview, when we should really have been prepping the questions and trying to relax. Luckily, the iPhone recording worked well, but this is something that would be better avoided in the future.
Secure appropriate venue before interview:
Another lesson we learnt as a group was that you should secure an appropriate venue in good time before the interview. We ran into an unexpected delay during our initial chat, as some people walked into the studio and began working just before we were about to start our Skype call. We had to delay the interview by upwards of half an hour while we searched for an alternative venue in a quieter location. This could have been construed as unprofessional by the interviewee.
Double-check everything before proceeding:
We came across a few inconvenient issues during the video editing stage. We repeatedly had to restart mixing the project into a single video due to missing clips, audio in the wrong place, and so on. We should have taken more time to check it over before processing it, instead of only checking on YouTube. I think this also applies to the question-creation process: although I believe we did that well, if we had not proofread the questions and taken advice from others, we might not have been as prepared for the interview.
Be flexible when conducting interviews:
In the classroom, a group sounds like a great idea: a few people getting together and sharing ideas on the same project, with more minds equalling a better result. In reality it can be a bit different. I do agree that we had a good group and we all had good ideas, but it was sometimes difficult to find times when we could all meet up to collaborate and/or communicate with our interviewee. If I were to do this again I would make more of an effort to put the project first and social life a close second.