<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>SpLU-RoboNLP 2023</title>
<link rel="stylesheet" type="text/css" href="stylesheets/normalize.css" media="screen">
<link
href='https://fonts.googleapis.com/css?family=Open+Sans:400,700'
rel='stylesheet' type='text/css'>
<link rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"
integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7"
crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="stylesheets/stylesheet.css" media="screen">
<link rel="stylesheet" type="text/css"
href="stylesheets/github-light.css" media="screen">
<!-- Latest compiled and minified CSS -->
</head>
<body>
<section class="page-header">
<h1 class="project-name">SpLU-RoboNLP 2023</h1>
<h2 class="project-tagline">Third International Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics</h2>
<h2 class="project-tagline">
<!-- <h2 class="project-date">August 6th, 2021</h2>-->
</h2>
<br>
</section>
<section>
<!-- Static navbar -->
<nav class="navbar navbar-default">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="#topics">Topics</a></a></li>
<!-- <li><a href="#schedule">Schedule</a></li>-->
<!-- <li><a href="#schedule">Schedule</a></li>-->
<!-- <li><a href="#invitedSpeakers">Invited Speakers</a></li>-->
<!-- <li><a href="#submission-info">Submission</a></li>-->
<!-- <li><a href="#important-dates">Important Dates</a></li>-->
<!--li><a href="#demos">Demos</a></li-->
<!--li><a href="#panel">Panel</a></li-->
<!--li><a href="#submission-info">Submission</a></li-->
<li><a href="#organizers">Organizers</a></li>
<li><a href="#program-commitee">Program Committee</a></li>
<!-- <li><a href="TBA" target="_blank">Childcare</a></li>-->
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container-fluid -->
</nav>
</section>
<section class="main-content">
<!--<h2>Useful Links</h2>-->
<h2>Aim and Scope</h2>
<p>Leveraging the foundation built in the prior workshops SpLU-RoboNLP 2021, SpLU 2020, SpLU-RoboNLP 2019, SpLU 2018, and RoboNLP 2017, we propose the third combined workshop on Spatial Language Understanding and Grounded Communication for Robotics. Natural language communication with general-purpose embodied robots has long been a dream inspired by science fiction, and natural language interfaces have the potential to make robots more accessible to a wider range of users. Achieving this goal requires continuously improving existing technologies, and developing new ones, for linking language to perception and action in the physical world. This joint workshop aims to forge collaborations by bringing together researchers who work on physical robot systems with human users, on simulated embodied environments, and on multimodal natural language and spatial language understanding.</p>
<h2>Topics of Interest</h2>
<ol>
<li>Aligning and Translating Language to Situated Actions</li>
<li>Evaluation Metrics for Language Grounding and Human-Robot Communication</li>
<li>Human-Computer Interactions Through Natural or Structured Language</li>
<li>Instruction Understanding and Spatial Reasoning Based on Multimodal Information for Navigation, Articulation, and Manipulation</li>
<li>Interactive Situated Dialogue for Physical Tasks</li>
<li>Language-based Game Playing for Grounding</li>
<li>Spatial Language and Skill Learning via Grounded Dialogue</li>
<li>Spatial Information Extraction in Robotics, Multimodal Environments, and Navigational Instructions</li>
<li>(Spatial) Language Generation for Embodied Tasks</li>
<li>(Spatially-) Grounded Knowledge Representations</li>
<li>Utilization and Limits of Large Language Models for Human-Robot Interaction</li>
<li>Inclusive, Equitable, and Culturally Aware Multimodal Interactive Technologies</li>
</ol>
<!--<a id="schedule" class="anchor" href="#schedule" aria-hidden="true"><span class="octicon octicon-link"></span></a>-->
<!--<h2>Schedule</h2>-->
<!--<iframe -->
<!--id="schedule-iframe"-->
<!--src="https://docs.google.com/document/d/e/2PACX-1vTAtdUWUCCLM2XBIhpyD-cTVuamwiMhTcZJ2iNv02ne2TJx_CfwqztxicubDmUl2xcFtS-B9AhOVeY3/pub?embedded=true"></iframe>-->
<!--<a id="invitedSpeakers" class="anchor" href="#invitedSpeakers" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Invited Speakers</h2>-->
<!--<div class="speaker-block">-->
<!-- <font size="3" color="black"><b><a href="http://knirb.net/">Thora Tenbrink</a>, Bangor University</b> <b>[<a href="https://splu-robonlp2021.github.io/slides/TenbrinkSPLU2021SmartRefSys.pdf">Slides</a>]</b></font>-->
<!-- -->
<!-- <p><b>Beyond physical robots: How to achieve joint spatial reference with a smart environment</b></p>-->
<!-- <p><b>Abstract:</b>-->
<!-- Interacting with a smart environment involves joint understanding of where things and people are or where they should be. Face-to-face interaction between humans, or between humans and robots, implies clearly identifiable perspectives on the environment that can be used to establish such a joint understanding. A smart environment, in contrast, is ubiquitous and thus perspective-independent. In this talk I will review the implications of this situation in terms of the challenges for establishing joint spatial reference between humans and smart systems, and present a somewhat unconventional solution as an opportunity.-->
<!-- </p>-->
<!-- <p><b>Bio:</b>-->
<!-- Thora Tenbrink is a Professor of Linguistics at Bangor University (Wales, UK), who uses linguistic analysis to understand how people think. She is author of “Cognitive Discourse Analysis: An Introduction” (Cambridge University Press, 2020) and "Space, Time, and the Use of Language" (Mouton de Gruyter, 2007), and has co-edited various further books on spatial language, representation, and dialogue-->
<!-- </p>-->
<!--</div>-->
<!--<hr/>-->
<!--<div class="speaker-block">-->
<!-- <font size="3" color="black"><b><a href="https://www.cs.cmu.edu/~./jeanoh/">Jean Oh</a>, Carnegie Mellon University</b></font>-->
<!-- -->
<!-- <p><b>Core Challenges of Embodied Vision-Language Planning</b></p>-->
<!-- <p><b>Abstract:</b>-->
<!-- Service dogs or police dogs work on real jobs in human environments. Are embodied AI agents intelligent enough to perform service tasks in a real, physical space? Embodied AI is generally considered as one of the ultimate AI problems that would require complex, integrated intelligence combining multiple subfields of AI including natural language understanding, visual understanding, planning, reasoning, inferencing, and prediction. While steep progresses have been witnessed in several subfields in recent years, the field of embodied AI remains extremely challenging. In this talk, we will focus on the Embodied Vision-Language Planning (EVLP) problem to understand the unique technical challenges imposed at the intersection of computer vision, natural language understanding, and planning problems. We will review several examples of the EVLP problem to discuss the current approaches, training environments, and evaluation methodologies. Through in-depth investigation of the current progress on the EVLP problem, this talk aims to assess where we are in term of making progress in EVLP and facilitate future interdisciplinary research to tackle core challenges that have not been fully addressed.-->
<!-- </p>-->
<!-- <p><b>Bio:</b>-->
<!-- Jean Oh is an Associate Research Professor at the Robotics Institute at Carnegie Mellon University. She is passionate about creating persistent robots that can co-exist and collaborate with humans in shared environments, learning to improve themselves over time through continuous training, exploration, and interactions. Jean’s current research is focused on autonomous social navigation, natural language direction following, and creative AI. Her team has won two Best Paper Awards in Cognitive Robotics at IEEE International Conference on Robotics and Automation (ICRA) for the works on following natural language directions in unknown environments and socially compliant robot navigation in human crowds, in 2015 and 2018, respectively. Jean received her Ph.D. in Language and Information Technologies at Carnegie Mellon University, M.S. in Computer Science at Columbia University, and B.S. in Biotechnology at Yonsei University in South Korea.-->
<!-- </p>-->
<!--</div>-->
<!--<hr/>-->
<!--<div class="speaker-block">-->
<!-- <font size="3" color="black"><b><a href="https://www.cs.princeton.edu/~karthikn/">Karthik Narasimhan</a>, Princeton University</b></font>-->
<!-- -->
<!-- <p><b>Language-guided policy learning for better generalization and safety</b></p>-->
<!-- <p><b>Abstract:</b>-->
<!-- Recent years have seen exciting developments in autonomous agents that can understand natural language in interactive settings. As we gear up to transfer some of these advances into real-world systems (e.g physical robots, autonomous cars or virtual assistants), we encounter unique challenges that stem from these agents operating in an ever-changing, chaotic world. In this talk, I will focus on our recent efforts at addressing two of these challenges through a combination of NLP and reinforcement learning — 1) grounding novel concepts to their linguistic symbols through interaction, and 2) specification of safety constraints during policy learning. First, I will demonstrate a new benchmark of tasks we designed specifically to measure an agent's ability to ground new concepts for generalization, along with a new model for grounding entities and dynamics without any prior mapping provided. Next, I will show how we can train control policies with safety constraints specified in natural language. This will encourage more widespread use of methods for safety-aware policy learning, which otherwise require domain expertise to specify constraints. Scaling up these techniques can help bring us closer to deploying learning systems that can interact seamlessly and responsibly with humans in everyday life. </p>-->
<!-- <p><b>Bio:</b>-->
<!-- Karthik Narasimhan is an assistant professor in the Computer Science department at Princeton University. His research spans the areas of natural language processing and reinforcement learning, with a view towards building intelligent agents that learn to operate in the world through both their own experience and leveraging existing human knowledge. Karthik received his PhD from MIT in 2017, and spent a year as a visiting research scientist at OpenAI prior to joining Princeton in 2018. His work has received a best paper award at EMNLP 2016 and an honorable mention for best paper at EMNLP 2015.-->
<!-- </p>-->
<!--</div>-->
<!--<hr/>-->
<!--<div class="speaker-block">-->
<!-- <font size="3" color="black"><b><a href="https://robotics.usc.edu/~maja/index.html">Maja Matarić</a>, University of Southern California</b></font>-->
<!-- <p><b>Socially Assistive Robotics: What it Takes to Get Personalized Embodied Systems into Homes for Support of Health, Wellness, Education, and Training-->
<!-- </b></p>-->
<!-- <p><b>Abstract:</b>-->
<!-- The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots for the ultimate robotics frontier: the home. The current pandemic has both caused and exposed unprecedented levels of health & wellness, education, and training needs worldwide, which must increasingly be addressed in the home. Socially assistive robotics has the potential to address those needs through personalized and affordable in-home support. This talk will discuss human-robot interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education and training activities. Methods and results will be presented that include modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, stroke patients, Alzheimer's patients, and children with autism spectrum disorders, in short and long-term (month+) deployments in schools, therapy centers, and homes. Research and commercial implications and pathways will be discussed. <em>Originally presented at Cornell CS Colloquium.</em> </p>-->
<!-- <p><b>Bio:</b>-->
<!-- Maja Matarić is the Chan Soon-Shiong Distinguished Professor in the Computer Science Department, Neuroscience Program, and the Department of Pediatrics and Interim Vice President for Research at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and the lead of the Viterbi K-12 STEM Center. She received her PhD in Computer Science and Artificial Intelligence from MIT in 1994, MS in Computer Science from MIT in 1990, and BS in Computer Science from the University of Kansas in 1987. </p>-->
<!--</div>-->
<!--<hr/>-->
<!--<a id="submission-info" class="anchor" href="#submission-info" aria-hidden="true"><span class="octicon octicon-link"></span></a>-->
<!--<h2>Submissions</h2>-->
<!--<p><b> Long Papers </b></p>-->
<!--<p>Technical papers: ACL style, 8 pages excluding references <br/></p>-->
<!--<p><b> Short Papers </b></p>-->
<!--<p>Position statements describing previously unpublished work or demos: ACL style, 4 pages excluding references <br/></p>-->
<!--<p><b>ACL Style files:</b> <a href="https://2021.aclweb.org/downloads/acl-ijcnlp2021-templates.zip">Template</a> <br/></p>-->
<!--<p><b>Submissions website:</b> <a href="https://www.softconf.com/acl2021/w21_splu-robonlp2021/">Softconf</a> <br/></p>-->
<!--<b>Non-Archival option:</b> ACL workshops are traditionally archival. To allow dual submission of work to SpLU-RoboNLP 2021 and other conferences/journals, we are also including a non-archival track. Space permitting, these submissions will still participate and present their work in the workshop, and will be hosted on the workshop website, but will not be included in the official proceedings. Please submit through softconf but indicate that this is a cross submission (non-archival) at the bottom of the submission form.-->
<!-- <a id="important-dates" class="anchor" href="#accepted-papers" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Important Dates</h2>-->
<!-- <ul>-->
<!-- <li>Submission Deadline: <del>April 26</del> <b>May 3</b>, 2021 (Anywhere on Earth) </li>-->
<!-- <li>Notification: May 28, 2021</li>-->
<!-- <li>Camera Ready Deadline: June 7, 2021</li>-->
<!-- <li>Workshop Day: August 6 (EDT), 2021</li>-->
<!-- </ul>-->
<!--<a id="schedule" class="anchor" href="#schedule" aria-hidden="true"><span class="octicon octicon-link"></span></a>-->
<!--<a id="schedule" class="anchor" href="#schedule" aria-hidden="true"><span class="octicon octicon-link"></span></a><h2>Schedule</a>-->
<!--</h2>-->
<!--<h2>Accepted Papers</h2>-->
<a id="organizers" class="anchor" href="#organizers" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<h2>Organizing Committee</h2>
<table cellspacing="0" cellpadding="0" style="width:100%">
<tr>
<td><li><a href="https://alikhanimalihe.wixsite.com/mysite">Malihe Alikhani</a></li></td>
<td>University of Pittsburgh</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="https://aishwaryap.github.io/">Aishwarya Padmakumar</a></td>
<td>Amazon Alexa AI</td>
<td>[email protected]</td>
</tr>
<tr>
<td><li><a href="https://eric-xw.github.io/">Xin (Eric) Wang</a></td>
<td>University of California, Santa Cruz</td>
<td>[email protected]</td>
</tr>
</table>
<!--Contact: <a href="mailto:[email protected]">[email protected]</a>-->
<a id="program-commitee" class="anchor" href="#program-commitee" aria-hidden="true"><span class="octicon octicon-link"></span></a>
<h2>Program Committee</h2>
<div>
<table cellspacing="0" cellpadding="0" float="left">
<!--<tr><td><li>Shiqi Zhang</td><td>SUNY Binghamton</td></li></tr>-->
</table>
</div>
<p>If you are interested in joining the program committee and reviewing submissions, please email Aishwarya Padmakumar, including your prior reviewing experience and a link to your publication record.</p>
</section>
<!-- Start of StatCounter Code for Default Guide -->
<script type="text/javascript">
var sc_project=11083511;
var sc_invisible=1;
var sc_security="2f97c6cf";
var scJsHost = (("https:" == document.location.protocol) ?
"https://secure." : "http://www.");
document.write("<sc"+"ript type='text/javascript' src='" +
scJsHost+
"statcounter.com/counter/counter.js'></"+"script>");
</script>
<noscript><div class="statcounter"><a title="web analytics"
href="http://statcounter.com/" target="_blank"><img
class="statcounter"
src="//c.statcounter.com/11083511/0/2f97c6cf/1/" alt="web
analytics"></a></div></noscript>
<!-- End of StatCounter Code for Default Guide -->
</body>
</html>