Please join us for another exciting BrainTalks UBC event! We hope to see you there!
Computationally modeling creative and vision-based art – approved for CME credit (MOC Section 1, 1.5 study credits)
Date: Thursday, November 15th, 2012
Pre-event: 5:30pm wine & cheese
Talk: 6:00 pm
Location: Paetzold Lecture Theatre, Jim Pattison Pavilion, VGH
Keynote Speaker: Dr. Steve DiPaola
Through the use of computer modeling techniques, scientists are gaining insight into the creative genius of portrait masters. Cognitive scientist and artist Steve DiPaola will demonstrate recently published work showing how portrait artists intuit the science behind visual and perceptual processing in order to guide the viewer's eye and create narrative. Using eye tracking and computer modeling, the study presents convincing evidence that Rembrandt, in his late portraits, used textural control reflecting a deep scientific understanding to exploit aspects of central vision, predating the formal scientific discovery of these mechanisms. DiPaola will also discuss his recent work showcased at MIT and Cambridge, which uses artificial intelligence and studies of human creative thought to model creativity on a computer (one that paints portraits) as a technique for better understanding how the creative minds of artists take unique approaches to solving problems. The study sheds light on how higher-level vision, attention, and creativity processes operate.
Steve DiPaola, active as both an artist and a scientist, is director of the Cognitive Science Program at Simon Fraser University and leads the iVizLab (ivizlab.sfu.ca), a research lab that strives to make computational systems bend more to the human experience by incorporating biological, cognitive, and behavioral knowledge models. Much of the lab's work involves creating computational models of human ideals such as expression, emotion, behavior, and creativity. He is best known for his AI-based computational creativity (darwinsgaze.com) and 3D facial expression systems. He came to SFU from Stanford University. His computer-based art has been exhibited internationally in galleries in NYC and London and in major museums, including the Whitney Museum, the MIT Museum, and the Smithsonian.
1. Discuss the potential use of computer modeling in the brain, health, and medical fields. What benefits or insights can be expected from this top-down approach to understanding the brain and associated mental processes?
2. Review current emerging research on the cognitive mechanisms of human creativity. Discuss how functions in human creativity, such as the capacity to shift between associative and analytic thought, can assist in understanding higher-level cognitive processing, and how these insights may apply to conditions such as autism.
3. Vision and attention are coupled in certain cognitive processes, including internal modeling of the external world. How humans cognitively model their world in real time can begin to be parsed out using new techniques such as eye tracking and sensing. Consider how computationally modeling the approaches of fine-art masters can affect our current understanding of how humans build representative models of their external worlds.
Opening 10-minute speaker: Miles Thorogood
The presentation will cover information-retrieval and machine-learning approaches to soundscape composition. Soundscape composition is the artistic combination of audio recordings to render to a listener a real or imagined place. It has many facets, including soundscape classification, information retrieval, and composition techniques. At the forefront of current soundscape composition research are the data mining of online audio databases and the machine classification of audio.
Miles’s research has approached autonomous soundscape composition using natural language processing and machine learning. Natural language processing lets a user enter a text, which could be a memory or a passage from a book, and semantically linked audio files are retrieved. A machine learning algorithm then classifies the audio based on human perceptual features and segments the audio files for composition. The end result is a soundscape composition that represents the input text. Possible applications of the research include eliciting deeper recollections of memory through soundscape and augmenting digital storytelling through soundscape composition.
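The pipeline described above, text in, semantically linked audio retrieved, classified, and assembled into a composition, can be sketched in miniature. Everything below (the tiny in-memory audio database, the tag sets, the keyword matcher standing in for real NLP and machine classification) is invented for illustration and is not the actual research system:

```python
# Hypothetical sketch of a text-to-soundscape pipeline.
# The database entries and tags are invented for illustration.

AUDIO_DB = [
    {"file": "harbour_waves.wav", "tags": {"sea", "waves", "boats"}},
    {"file": "forest_birds.wav", "tags": {"forest", "birds", "wind"}},
    {"file": "street_traffic.wav", "tags": {"city", "traffic", "crowd"}},
]

def extract_keywords(text):
    """Stand-in for NLP semantic extraction: lowercase word split."""
    return {word.strip(".,!?") for word in text.lower().split()}

def retrieve(keywords):
    """Return database entries whose tags overlap the input keywords.
    A real system would use semantic similarity, not exact tag overlap."""
    return [entry for entry in AUDIO_DB if entry["tags"] & keywords]

def compose(text):
    """Build an ordered playlist of recordings representing the text.
    A real system would also classify each recording by perceptual
    features and segment it before mixing."""
    hits = retrieve(extract_keywords(text))
    return [entry["file"] for entry in hits]

print(compose("I remember the sea and the birds in the forest"))
# ['harbour_waves.wav', 'forest_birds.wav']
```

The interesting design question the real research addresses, and this toy does not, is replacing the exact tag match with semantic linking and replacing the playlist with perceptually classified, segmented audio.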
1. The concepts and history of soundscape, including sound perception.
2. The state of the art in machine learning for soundscape composition.
3. Our research in NLP information retrieval, soundscape modelling and machine learning.
4. An invitation for audience input into possible applications of the system in their research domains.
Miles Thorogood is a PhD student at the School of Interactive Arts and Technology, Simon Fraser University, and a lecturer at Emily Carr University, teaching Programming for Creative Practice. He works in the area of computational creativity, conducting research aimed at endowing machines with creative autonomous behaviour; specifically, he researches and develops new systems for exploring and analyzing soundscape through online audio databases. His focus is on semantic representations in natural language and on classifying soundscape recordings using machine learning algorithms. While the applications of this research are many, such as in education and environmental studies, the broader intention is to understand creativity.