Lucid

    Sleep to educate

    22 February 2013 by Luke

    ## Hypnopaedia

    Well, wouldn’t it be good if you could just fall asleep and wake up knowing fluent French, or Spanish? Hypnopaedia is just that: the ability to educate the mind while sleeping. Science fiction authors such as Aldous Huxley imagined the kind of ‘technofuture utopia’ we might all wonder about ending up in one day, armed with the ability to educate ourselves while we sleep. In the novel Brave New World, the ability to educate people’s minds while they sleep is used to condition them into staying within their designated part of society; the protagonist ends up hanging himself, possibly a sign that this shouldn’t be the way forward for this method of education. A Clockwork Orange also sees sleep-learning techniques used on its main character, though to no avail.

    There was initial optimism about the ability to learn while you sleep, and research in 1958 by William Dement and Edward Wolpert pointed to external stimuli being interpreted by the mind and incorporated into the dreams of participants: one subject who was squirted with water reported, after being awoken, a dream of a leaky roof. Conduit and Coleman (1998) found that between 10% and 50% of people report external stimuli being incorporated into their dreams.

    While incorporating external stimuli is almost a lower state of function, actually educating the mind during sleep is another layer, since absorbing information in order to recall it later requires greater mental function. A classic example of an external stimulus incorporated within a dream is a sleeper hearing their alarm in their dream just before waking. A 1956 study by Charles W. Simon and William H. Emmons found that recall of material when awake was low unless alpha-wave activity was observed, which also meant that the subject was about to awaken.

    ## Pavlov and his dogs

    The ability to teach the body to respond to a stimulus can be traced back to Pavlov’s famous experiment involving dogs, a bell, a bowl of dog food and a lot of dog slobber. Pavlov conditioned the dogs to associate a bell with being fed, so that if he rang the bell but didn’t feed them, the dogs would still salivate. This, however, required a positive physiological response from the body, which in Pavlov’s case came from the food. In 2012, Anat Arzi of the Weizmann Institute of Science in Israel used classical conditioning to teach fifty-five participants to associate odours with sounds as they slept. Arzi is quoted as saying, “There will be clear limits on what we can learn in sleep, but I speculate that they will be beyond what we have demonstrated.”

    ## Directing Dreams

    Luke Jerram, a digital artist, has worked on projects involving this ability to enhance and alter the dreams of participants in his installations. A notable example is Dream Director, which aims to raise questions about the rules of interaction and the boundaries of science and art. Participants in the installation are allocated sleep pods, each with specific sounds used to try to alter their dreams; during the day, the public are invited to log their own dreams or view the logs of the participants’ dreams. While Jerram aimed simply to mould the dream around an external stimulus, much as Dement and Wolpert did in 1958, we would like to push further and have participants recall the information we pass to them during their sleep.

    Lucid.app

    21 February 2013 by Ben

    As mentioned in previous posts, Lucid revolves around the exploration and experimentation in the field of Hypnopaedia – or sleep-learning.

    From the early stages of development, we knew that we would need some kind of device that could be used to track the varying sleep cycles that a user will go through nightly. Our initial idea was to use an Arduino attached to various sensors such as accelerometers and microphones. We eventually realised that this solution would result in a bulky and impractical product, and began to look elsewhere.

    Luckily, the iPhone 4S in my left pocket happened to support every feature that we wished to implement – accelerometer, microphone, speaker – and it did it all in a beautiful little package.

    So, we decided to build an iOS app, allowing us to access all of the native hardware features that we would require for our project.

    As outlined in ‘Dreams’, the aim of the app was simple: Detect when a user is in a state of REM sleep, and begin to play them audio. The user will then be subjected to a ‘test’ at a later date to determine whether or not they absorbed the information played to them whilst they slept.

    REM Detection

    The first step of the ‘injection’ process was to determine when a user was in REM sleep. Apps such as Sleep Cycle achieve this by waiting for complete stillness – one of the traits of the muscle paralysis associated with REM sleep. We achieved this in a very similar way: Lucid starts a 30 minute timer when activated, and if the timer runs uninterrupted by movement, we know it is likely that the user is in REM sleep.

    As soon as the user drifts back into a lighter state of sleep, however, the timer is interrupted, reset to zero, and the process starts all over again. This is likely to occur 3–5 times per night.
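The stillness-timer heuristic described above can be sketched in a few lines of JavaScript. The class and constant names here are illustrative, not taken from the actual Lucid source:

```javascript
// 30 minutes of uninterrupted stillness suggests REM sleep.
const REM_THRESHOLD_MS = 30 * 60 * 1000;

class RemDetector {
  constructor() {
    this.lastMovement = null;
  }
  // The timer begins when the app is activated.
  start(now) {
    this.lastMovement = now;
  }
  // Any detected movement resets the timer back to zero.
  onMovement(now) {
    this.lastMovement = now;
  }
  // True once the timer has run uninterrupted for 30 minutes.
  isLikelyREM(now) {
    return this.lastMovement !== null &&
           now - this.lastMovement >= REM_THRESHOLD_MS;
  }
}
```

In practice `onMovement` would be fed by the accelerometer, with some thresholding so that sensor noise doesn’t count as movement.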

    Playing Audio

    Now that we know our user is in REM sleep, we are ready to begin sending them audio. The ‘test’ involved a user correctly solving three puzzles: the first a simple deduction puzzle, the next a descriptive storytelling task, and the final a musical composition challenge.

    Lucid will play clues during a user’s REM cycles that relate directly to these puzzles, hopefully ensuring that the user will be able to complete them successfully when faced with them.

    An alarm clock with a twist

    Now we had a fully functioning app, ready and waiting to inject thoughts into those willing to participate, but we wanted the app to appear as though it was nothing more than an alarm clock.

    On top of these features, we added all the basic functionality of a standard alarm clock, allowing the user to use the app on a daily basis without suspecting anything.

    The Challenge

    20 February 2013 by Flo

    Prior to its current development, the second half of our experiment – the challenge – underwent many heartbreaking iterations. The revision closest to the final concept was essentially the sum of our individual pursuits:

    • “I want to explore the achievability of making pseudo-generative environments within Unity using our data as parameters and giving people the ability to explore (see Bad Trip and the lovingly-crafted indie first-person-island-synthesizer Proteus).”

    • “I want to create something with a collaborative element, a game-like experience that features an array of puzzles whereby the hints cannot be found within the experience itself and would get people to share their insights with each other.”

    • “I want to make an iOS app that works and looks gorgeous! Preferably it would also make use of a sophisticated algorithm and does more than we want the users to believe it does.”

    • “I want to do something crazy in the dome, using biofeedback sensing technology.”

    Building Dreamland

    Designing a Surrealist dream world is pretty much the ballpit equivalent for every game environment modeller: you will literally never run out of ideas, and you might spend hours building an area where everything is made of candy floss only to find it doesn’t fit stylistically with any other aspect of the environment. A common theme was needed beyond just dreams – the Land of Cockaigne, perhaps, or a theme park setting.

    Back when we were still exploring the possibilities of having the player relive a deterministic sequence of common dreams, we introduced the concept of the hotel lobby: a central hub that leads to a corridor of numbered dream rooms. Breaking the game world up into rooms would not only ease development dramatically, but also inform a lot of design choices, as we were now dealing with something more concrete, man-made and, of course, interior.

    After we settled on making a single-player experience focused on the dome environment, it dawned on us that the only way our game should be experienced was once. With that finiteness came the whole idea of the recruitment process, which would then dictate the world’s parkour-like layout and give us more ways of fleshing out the space with meaningful geometry (like hinting at areas out of your reach to make it feel less linear).

    For the adventurous among you, here is a curated set of sketches depicting various environments.

    Developing a Visual Style

    For visuals we mostly inspired each other during the process. Initially when I told Ben to make me some textures with “dystopian art déco grandeur and a dash of communist propaganda” he produced what from then on was known as the Lucid logo. Ben’s simple geometric style, in combination with several examples of minimalist architecture then influenced the design of the lobby and so on and so forth.

    Moodboard

    Only Luke and I were brave enough to build in three dimensions, and since he had not touched a 3D package in years, we decided to go for plain minimal geometry and only use textures where necessary, to create a more homogeneous picture. In addition, we tried to give each room a unique feel by experimenting with different lighting setups.

    The models turned out fine, yet it took a little more effort to make everything look good in the dome environment. For starters, the dome camera rig (or perhaps my graphics card) did not allow for soft real-time shadows, so they all had to be calculated and baked into the environments in advance using Unity’s Beast lightmapper. Also, making the switch from Blender to Cinema 4D mid-development, to better incorporate Luke’s 3ds Max models, may not have been the best idea in terms of productivity.

    Cello model

    Experiment + game + play

    As mentioned earlier, we settled on three puzzle rooms, each testing for different stimuli the player would have received before playing the game: a simple choice out of three options, a reconstruction of a story through objects depicting events or turning points, and a re-creation of a musical track with the help of the given instruments.

    Originally we wanted the iPhone – the one responsible for moving the player around – inside a little ball to make it feel more natural, so we didn’t consider using the phone’s screen as an input mechanism until late. This meant we had to come up with ways of interacting with the environment without buttons instead. Easier said than done, yet with a little inspiration from Kairo, it certainly didn’t seem all that impossible.

    For the first two challenges the player would simply go down a corridor and approach an object to activate it; only for the musical challenge did we add some pressure pads on the floor to make it less fiddly. It wasn’t ideal, but it was playable – and I would also like to apologise for my inconsistent performance during the recording of those sound clips :>

    In the unlikely event of the participant getting all the answers right, he/she would receive a keycard allowing him/her to explore other parts of the starting area. We didn’t feel the need to punish the player for getting things wrong, but we made sure there would be enough audible feedback not to lose the player along the way.

    We felt there was scope for more rooms, but with regard to presentation time and player satisfaction we stopped at three. In the end I was glad that Luke offered to help out with the game logic here and there – despite no prior knowledge of C# – as the creation and sorting of assets could easily have been a one-man job on its own. I can finally understand how there are people across several industries who “just” do lighting.

    Let the polygons speak for themselves

    Lobby · Crossroads · Choice Room · Story Room · Theatre Room

    Technologies used and abused during the creation of this project

    git, Photoshop, Unity, Sublime Text, Blender, Cinema 4D, 3ds Max, Xcode, PhoneGap, Node.js, After Effects, GarageBand, Jekyll, Paper (iOS) and, last but not least, the internet.

    The Immersive Environment

    17 February 2013 by Sam

    “Participants within our experiment are invited to take part in the ‘testing’ phase whenever they feel ready to do so.”

    The ability to implant thoughts into a participant’s mind is the main focus of our experiment. Given the scepticism surrounding Hypnopaedia, we decided to help our participants along by making the environment for recall as immersive as possible, in the hope that recall would be improved. The testing phase will be conducted within the Immersive Vision Theatre at Plymouth University, which is home to a hemispherical dome, tilted audience seating and an atmospheric environment. We hope that the increased peripheral vision and enhanced audio-visual experience will help participants recall the information from their sleeping state for use within our experimental arena.

    Enhancing the Interface

    Each participant will need three tools to fully interact with our experiment: two iPhones and a NeuroSky MindWave (all are provided). While the phones are used to allow a user to control their experience, the MindWave device is intentionally added to help users become more aware of the specific state they’re currently in.

    We use the NeuroSky MindWave to control a blurring effect within the experiment. The blur correlates directly with a participant’s relaxation state and diminishes as the subject becomes more relaxed. We chose this simple blur effect to act as a feedback loop for the subject, allowing them to constantly assess how relaxed they are.
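As a rough sketch of this feedback loop: the MindWave reports a “meditation” reading from 0 (tense) to 100 (relaxed), which could be mapped to a blur amount like so. The function name and `maxBlur` scaling are assumptions for illustration, not the app’s actual values:

```javascript
// Map the MindWave's 0-100 meditation reading to a blur radius.
// Fully relaxed -> no blur; fully tense -> maximum blur.
function blurForRelaxation(meditation, maxBlur = 20) {
  // Clamp in case the headset reports out-of-range values.
  const clamped = Math.min(100, Math.max(0, meditation));
  return maxBlur * (1 - clamped / 100);
}
```

Inside the experiment, the resulting value would drive a full-screen blur effect each frame, so the scene literally sharpens as the participant relaxes.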

    Player Controls

    Early on in this project we transitioned from a co-op experience to a single-player one, which left us with a detached controller model aimed at two players working together to achieve movement and view. Instead of throwing our controls into a gamepad, we decided to keep them split and allow the player to use both at the same time. Both movement and look are based around an iPhone’s gyroscope and accelerometer.

    Movement works by holding the phone so that it acts like a controller: tilting forward and backward moves the subject in the respective direction, and tilting left and right rotates the subject, allowing multidirectional movement.

    The ability to look around is slightly different and involves an iPhone attached to the MindWave sensor on the subject’s head. Attaching an iPhone to a subject’s head may seem ominous or unnecessary, but it adds an interesting element of natural control. The iPhone controls where the character within Unity is currently looking; with such an extreme field of view within the IVT, this becomes a subtle movement. As the subject naturally looks at different parts of the environment, the centre position shifts slightly in response to their head movement.
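A hypothetical sketch of the tilt mapping for the movement controller, assuming a small dead zone so that a roughly level phone produces no drift. The thresholds and names here are illustrative, not taken from our controller code:

```javascript
// Convert phone tilt (pitch/roll in radians) to movement commands.
// deadZone: tilt below this is ignored; maxTilt: tilt giving full speed.
function tiltToMovement(pitch, roll, deadZone = 0.1, maxTilt = 0.5) {
  const shape = (t) => {
    if (Math.abs(t) < deadZone) return 0; // level enough: no input
    const sign = t > 0 ? 1 : -1;
    // Scale the remaining tilt linearly into -1..1, capped at full speed.
    return sign * Math.min(1, (Math.abs(t) - deadZone) / (maxTilt - deadZone));
  };
  return {
    forward: shape(pitch), // tilt forward/backward to walk
    turn: shape(roll)      // tilt left/right to rotate
  };
}
```

The resulting `forward` and `turn` values would then be streamed to Unity and applied to the character controller each frame.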

    Cogs in the machine

    To achieve the interaction between the mobile accelerometer in our controller and the experiment created within Unity 3D, we used a number of real-time web technologies. Using Node.js, Socket.io and Node-OSC, we were able to create seamless interaction and enhance our project with interaction devices of our choosing. All of the code behind our project is publicly available in our GitHub repo, here.
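The translation step at the heart of such a bridge can be sketched as a pure function: a socket payload from the phone becomes an OSC message that a node-osc client could forward on to Unity. The OSC address and payload field names below are illustrative, not taken from our repo:

```javascript
// Turn a {x, y, z} accelerometer payload from the phone into an
// OSC-style message: an address plus an ordered argument list.
function toOscMessage(payload) {
  return {
    address: '/lucid/accelerometer',
    args: [payload.x, payload.y, payload.z]
  };
}

// In the real bridge this would sit inside a Socket.io handler, roughly:
// socket.on('accel', (data) => oscClient.send(toOscMessage(data)));
```

Keeping the translation separate from the transport makes it easy to swap in other interaction devices, which is exactly the flexibility this stack gave us.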

    Dreams

    13 February 2013 by Ben

    In recent months, the iOS App Store and Google Play have seen an influx of sleep analysis applications that allow a user to monitor and analyse numerous aspects of their everyday sleeping habits, including duration, the amount of time spent in light sleep vs. REM sleep and the overall quality of sleep, based on various factors.

    Using sleep data collected by the inbuilt accelerometer on supported iOS and Android devices, the developers of these apps are also able to provide extra functionality beyond tracking and analysis such as the ability for a user to be awoken gradually, using algorithms that factor in details about the user’s sleep patterns.

    Sleep Cycle for iOS does exactly this. The app knows when a user is in a state of ‘light sleep’, and is able to awaken them during one of these phases to ensure a fresher start to the day.
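The wake-window idea behind such apps can be sketched as a simplified model (not Sleep Cycle’s actual algorithm; times here are minutes since midnight): given a target alarm time and a look-back window, fire the alarm at the first detected light-sleep phase inside the window, falling back to the target time otherwise.

```javascript
// target: alarm time; windowMinutes: how early we may wake the user;
// lightSleepTimes: times at which light sleep was detected.
function wakeTime(target, windowMinutes, lightSleepTimes) {
  const windowStart = target - windowMinutes;
  const candidates = lightSleepTimes
    .filter((t) => t >= windowStart && t <= target)
    .sort((a, b) => a - b);
  // Wake at the earliest light-sleep moment in the window,
  // or at the target time if none was detected.
  return candidates.length > 0 ? candidates[0] : target;
}
```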

    Hypnopædia

    Hypnopædia (or Hypnopaedia, for those of us who prefer our letters disconnected from one another) is the practice of conveying information to an individual whilst they sleep. Supposedly, sleep-learning is moderately effective when subjects are fed direct passages or facts, word for word. Many scientists and researchers, however, disagree, with Charles W. Simon and William H. Emmons stating publicly that learning whilst asleep is “impractical and probably impossible”.

    Probably impossible. There’s only one way to find out…

    Sleep cycles

    Taking a leaf out of the books of sleep cycle analysis app developers, Lucid will determine whether a subject is in REM sleep (and therefore dreaming), or in a lighter state of sleep, possibly partially conscious.

    It is important for us to determine when our subject is in REM sleep, as this is the only time when we can be sure that they are truly unconscious. During REM sleep, the body places itself into a state of near-complete stillness – a muscle paralysis known as REM atonia. Luckily, this is relatively simple to monitor using the sensors available to us in modern mobile devices, and it will form the basis of our project.

    The Big Plan

    The ultimate aim of our project is simple: to discover whether or not a subject is able to hear, understand, learn and be influenced by audio that is presented to them whilst in a sleeping state.

    Part game / part experiment, players will be presented with audio samples of differing qualities: simple, descriptive, musical etc. (these will be discussed in-depth in a future post).

    The player’s goal is to complete a number of puzzles in a 3D environment that require specific cues and clues to complete. The kicker? The clues are only presented to the player during sleep. Whoa.

    A player is free to subject themselves to the audio every night, for as long as they would like, with only one opportunity to complete the puzzles once they have begun. It is up to a player to make the move from the ‘training’ phase to the ‘test’ phase when they feel confident enough, whether it be consciously or subconsciously.