Differences

This shows you the differences between two versions of the page.

home [2017/07/14 19:51] by elmer
home [2017/11/27 11:20] (current) by rosenholtz
Line 10:
  </WRAP>
  ----
- <WRAP column 25% people>
+ <WRAP column 20% people>
  //**Faculty**//
Line 46:
  <WRAP column 60%>
- Founded in 1994, the Perceptual Science Group of the Department of Brain and Cognitive Sciences at MIT does research in human visual perception, machine vision, image processing, and human-computer interaction. Both the Adelson Lab and the Rosenholtz Lab are located in Building 32.
- \\
+ The Perceptual Science Group of the Department of Brain and Cognitive Sciences at MIT does research in human vision, machine vision, human-computer interaction, and touch sensing for robotics. Both the Adelson Lab and the Rosenholtz Lab are part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), located in the Stata Center.\\
  \\
  ----
- //**Special Event — Saturday, July 15, 2017**//
- [[https://sites.google.com/view/rss17ts/overview|{{:rss17-1.png}}]]
- If robots are to perform everyday tasks in the real world, they will need sophisticated tactile sensing. The tactile data must be integrated into multi-sensory representations that support exploration, manipulation, and other tasks.
- This workshop asks the following questions:
-   * What kinds of tactile technologies are currently available, and what are needed?
-   * What types of representations are best for capturing and exploiting tactile data?
-   * How can tactile information be combined with other information to support specific tasks?
-   * Can learning help to provide suitable representations from high-dimensional sensory data?
- This workshop will bring together experts from the fields of tactile sensing, sensor design, manipulation, and machine learning. We expect that the pairing of theoretical and applied knowledge will lead to an interesting exchange of ideas and stimulate an open discussion about the goals and challenges of tactile sensing.
- \\
- [[https://sites.google.com/view/rss17ts/overview|More information...]]\\
- \\
- //**Location**//
- MIT Building 36 — Room 112 \\
- 50 Vassar Street \\
- Cambridge, MA 02139 \\
- [[https://www.google.com/maps/place/50+Vassar+St,+Cambridge,+MA+02139/@42.3613361,-71.0942629,17z/data=!3m1!4b1!4m5!3m4!1s0x89e370abdb5abad9:0xf77ea85672e15a0!8m2!3d42.3613361!4d-71.0920689|Directions, via Google Maps]]
- [[https://www.google.com/maps/place/50+Vassar+St,+Cambridge,+MA+02139/@42.3613361,-71.0942629,17z/data=!3m1!4b1!4m5!3m4!1s0x89e370abdb5abad9:0xf77ea85672e15a0!8m2!3d42.3613361!4d-71.0920689|{{:rss17-2.png|}}]]
+ //**News...**//
+ \\
+ **Peripheral vision, inference, and visual awareness**: An extended abstract is now available, based on Ruth Rosenholtz's invited talk at the VSS 2017 symposium "The Role of Ensemble Statistics in the Visual Periphery": [[https://arxiv.org/abs/1706.02764|What modern vision science reveals about the awareness puzzle: Summary-statistic encoding plus decision limits underlie the richness of visual perception and its quirky failures]].
+ 
+ **Attention and limited capacity**: Ruth Rosenholtz has a new paper on what we have learned about attention by studying peripheral vision, which leads to a new conceptualization of limited capacity in vision and of the mechanisms for dealing with it: "[[publications:attentionhvei2017|Capacity limits and how the visual system copes with them]]."
+ 
+ **Modeling visual crowding**: Shaiyan and Ruth's work testing a unified account of visual crowding has been accepted to the [[http://jov.arvojournals.org/article.aspx?articleid=2498972|Journal of Vision]].
+ 
+ **Paper accepted to IROS 2014**: Rui and Wenzhen's work on adapting the [[http://www.gelsight.com|GelSight]] sensor for robotic touch has been accepted to IROS 2014. This work was done in collaboration with the [[http://www.ccs.neu.edu/home/rplatt/|Platt]] group at Northeastern University, and it was covered by [[http://newsoffice.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|MIT News]].
+ 
+ **Taking a new look at subway map design**: The Rosenholtz lab's Texture Tiling Model was used to evaluate subway maps for the MBTA Map Redesign Contest. Check out the [[http://www.fastcodesign.com/3020708/evidence/the-science-of-a-great-subway-map|Fast Company Design article]], the [[http://blog.visual.ly/how-do-our-brains-process-infographics-mit-mongrel-shows-peripheral-vision-at-work/|Visual.ly article]], and the [[http://www.csail.mit.edu/node/2094|CSAIL news article]]. The news was also picked up by other outlets, including [[http://blogs.smithsonianmag.com/smartnews/2013/11/how-much-of-a-subway-map-can-one-persons-brain-process/|Smithsonian Magazine]] and [[http://dish.andrewsullivan.com/2013/11/07/building-a-better-subway-map/|The Dish]]. Here is an older article about our research from [[http://www.sciencedaily.com/releases/2011/02/110202215339.htm|Science Daily]].
+ <WRAP box>
+ <WRAP box left 50%>**[[https://sites.google.com/view/rss17ts/overview|Tactile sensing for manipulation]]**\\
+ If robots are to perform everyday tasks in the real world, they will need sophisticated tactile sensing. The tactile data must be integrated into multi-sensory representations that support exploration, manipulation, and other tasks.\\
+ </WRAP>
+ [[https://sites.google.com/view/rss17ts/overview|{{:rss17-1.png?250}}]]
+ //(workshop held July 15, 2017)//\\
+ </WRAP>
  ----
- 
- //**In Other News...**//
- \\
  <WRAP box>
  <WRAP box right 50%>**[[http://news.mit.edu/2017/gelsight-robots-sense-touch-0605|Giving robots a sense of touch]]**\\
Line 85 (old) / Line 76 (new):
  </WRAP>
  ----
- \\
  <WRAP box>
- <WRAP box left 50%>**[[http://news.mit.edu/2016/artificial-intelligence-produces-realistic-sounds-0613|Artificial intelligence produces realistic sounds that fool humans]]**\\
- Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world.\\
+ <WRAP box left 50%>**[[http://news.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|Fingertip sensor gives robot unprecedented dexterity]]**\\
+ Armed with the GelSight sensor, a robot can grasp a freely hanging USB cable and plug it into a USB port.\\
  </WRAP>
- {{youtube>small:0FW99AQmMc8}}
+ {{youtube>small:w1EBdbe4Nes}}
  </WRAP>
  ----
- \\
  <WRAP box>
  <WRAP box right 50%>**[[http://news.mit.edu/2011/tactile-imaging-gelsight-0809|GelSight — Portable, super-high-resolution 3-D imaging]]**\\
Line 101 (old) / Line 90 (new):
  </WRAP>
  ----
- \\
  <WRAP box>
- <WRAP box left 50%>**[[http://news.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|Fingertip sensor gives robot unprecedented dexterity]]**\\
- Armed with the GelSight sensor, a robot can grasp a freely hanging USB cable and plug it into a USB port.\\
+ <WRAP box left 50%>**[[http://news.mit.edu/2016/artificial-intelligence-produces-realistic-sounds-0613|Artificial intelligence produces realistic sounds that fool humans]]**\\
+ Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world.\\
  </WRAP>
- {{youtube>small:w1EBdbe4Nes}}
+ {{youtube>small:0FW99AQmMc8}}
  </WRAP>
  ----
- \\
- \\
- **Peripheral vision, inference, and visual awareness**: An extended abstract is now available, based on Ruth Rosenholtz's invited talk at the VSS 2017 symposium "The Role of Ensemble Statistics in the Visual Periphery": [[https://arxiv.org/abs/1706.02764|What modern vision science reveals about the awareness puzzle: Summary-statistic encoding plus decision limits underlie the richness of visual perception and its quirky failures]].
- 
- **Attention and limited capacity**: Ruth Rosenholtz has a new paper on what we have learned about attention by studying peripheral vision, which leads to a new conceptualization of limited capacity in vision and of the mechanisms for dealing with it: "[[publications:attentionhvei2017|Capacity limits and how the visual system copes with them]]."
- 
- **Modeling visual crowding**: Shaiyan and Ruth's work testing a unified account of visual crowding has been accepted to the [[http://jov.arvojournals.org/article.aspx?articleid=2498972|Journal of Vision]].
- 
- **Dr. Shaiyan Keshvari graduates!** Shaiyan defended his thesis, //At the Interface of Materials and Objects in Peripheral Vision//, on July 29, 2016.
- 
- **Dr. Phillip Isola graduates!** Phil defended his thesis, //The Discovery of Perceptual Structure from Visual Co-occurrences in Space and Time//, on August 17, 2015. He has just started as a postdoc with Alexei (Alyosha) Efros at UC Berkeley. Check out a [[:gallery:defenseparties|photo]] of Dr. Isola's celebratory reception, complete with detective costume.
- 
- **Dr. Rui Li graduates!** Rui defended his thesis, //Touching is Believing: Sensing and Analyzing Touch Information with GelSight//, on April 30, 2015. He is now working on a startup called [[http://virtulus.com/|Virtulus]] in Cambridge. Here is a [[:gallery:defenseparties|photo]] from the post-defense reception.
- 
- **Paper accepted to IROS 2014**: Rui and Wenzhen's work on adapting the [[http://www.gelsight.com|GelSight]] sensor for robotic touch has been accepted to IROS 2014. This work was done in collaboration with the [[http://www.ccs.neu.edu/home/rplatt/|Platt]] group at Northeastern University, and it was covered by [[http://newsoffice.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|MIT News]].
- 
- **Taking a new look at subway map design**: The Rosenholtz lab's Texture Tiling Model was used to evaluate subway maps for the MBTA Map Redesign Contest. Check out the [[http://www.fastcodesign.com/3020708/evidence/the-science-of-a-great-subway-map|Fast Company Design article]], the [[http://blog.visual.ly/how-do-our-brains-process-infographics-mit-mongrel-shows-peripheral-vision-at-work/|Visual.ly article]], and the [[http://www.csail.mit.edu/node/2094|CSAIL news article]]. The news was also picked up by other outlets, including [[http://blogs.smithsonianmag.com/smartnews/2013/11/how-much-of-a-subway-map-can-one-persons-brain-process/|Smithsonian Magazine]] and [[http://dish.andrewsullivan.com/2013/11/07/building-a-better-subway-map/|The Dish]]. Here is an older article about our research from [[http://www.sciencedaily.com/releases/2011/02/110202215339.htm|Science Daily]].
- </WRAP>
  <WRAP clear></WRAP>
 