If we are to understand language acquisition, then we must understand how the visual system influences our language learning capabilities.
Subjects will be split into two groups: those who learn Russian with me in person and those who listen to a recording of me.
I will teach them five basic Russian words and ask the subjects to memorize them - the pronunciation and the pictorial meaning (instead of telling the subjects the English translation of each word, I will show them a picture that corresponds to that word's meaning).
Three days later, I will meet with both groups and prompt the subjects, with the same pictures, for the five learned Russian words.
If they remember a word, I will record, on a scale of check minus, check, check plus, how close their pronunciation is.
I will draw conclusions from this data about whether seeing my mouth/lips while learning the words helped the subjects remember and pronounce the words better.
Collected and analyzed data relating visual input to language learning.
Demonstrated that having visual input increases a subject's ability to remember the words and pronounce them correctly.
Reimplementation of Melanoma Recognition Program
In order to take the field of AI to the next level, we need to focus on developing technology to enhance humanity as opposed to finding ways to replace it. One way we can do this is to take AI concepts and apply them to the field of medical diagnosis. A recent example of this is the system Codella et al. developed to detect melanoma from images. If we are to understand the role AI can play in medical diagnosis and whether or not the system can be improved, we must reimplement the system described and attempt to improve on it.
Do background research on the system's setup and familiarize myself with unfamiliar concepts.
Implement system on TensorFlow or similar neural network framework.
Find ways in which system could be improved to attain better classification accuracies.
Improved the system developed by Codella et al. to achieve higher accuracies when detecting melanoma from an image.
Demonstrated that AI can play an important role in the field of medical diagnosis.
Dillon Dumesnil and Alexander Nordin
Stock Shift Prediction using News Articles
If we can anticipate market movement based on news feeds in real time, then we can garner insight into the way the market reacts to various types of news.
Research project with implementation.
Scrape time-stamped financial information from news websites.
Use the scraped information to build a model of how the market shifts based on various characteristics of the news event: for example, severity, breadth, and domestic versus international scope.
Once the model is trained, use it to predict stock market movements in real time.
Measure model efficacy in the real world. Basically, would we make money? Probably not.
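The modeling step above could be sketched as follows. This is a minimal sketch, assuming hand-crafted event features and an invented toy dataset; the perceptron, the feature names, and the "severe, broad news drives the market down" rule are all illustrative placeholders, not part of the proposal.

```python
# Sketch of the modeling step: a perceptron mapping hand-crafted news-event
# features (severity, breadth, is_domestic, plus a constant bias term) to a
# market-shift label (+1 up, -1 down). All data here is an invented toy set.

def predict(weights, features):
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score >= 0 else -1

def train(events, labels, epochs=200, lr=0.1):
    weights = [0.0] * len(events[0])
    for _ in range(epochs):
        for x, y in zip(events, labels):
            if predict(weights, x) != y:  # nudge weights toward the mistake
                weights = [w + lr * y * f for w, f in zip(weights, x)]
    return weights

# Toy events: [severity, breadth, is_domestic, bias]. In this invented data,
# severe and broad news is labeled as driving the market down (-1).
events = [[0.9, 0.8, 1.0, 1.0],
          [0.2, 0.1, 1.0, 1.0],
          [0.8, 0.9, 0.0, 1.0],
          [0.1, 0.3, 0.0, 1.0]]
labels = [-1, 1, -1, 1]

weights = train(events, labels)
```

Once trained on real scraped events, `predict` would be applied to incoming news in real time; an actual implementation would presumably use a richer classifier rather than this toy.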
Collected and analyzed data relevant to establishing whether stock market trends are reflected in popular news articles.
Demonstrated that current headlines have (or do not have) a direct correlation with the stock market.
Emanuele Ceccarelli, Alec Anderson
Game-playing Programs That Explain Themselves
Game-playing programs such as AlphaGo have beaten human experts at complex games previously thought to be computationally intractable. However, these programs tell us little about human intelligence, partially due to their use of neural networks that lack intuitive explanation. If we are to understand how humans play these games, these programs must be able to explain themselves.
To simplify this problem, we will restrict our analysis to a simpler game, checkers, so that we can develop methods for the system to explain its decisions in a human-understandable way.
Research project with implementation, partially a reimplementation
We will review existing literature and methods for self-explaining neural networks.
We will implement a system similar to AlphaGo: a progressive deepening algorithm that uses minimax search with alpha-beta pruning, using a heuristic function always equal to zero as a starting point.
We will train our heuristic function using a neural net with checker board positions as input. The board positions will be inputted as vectors of discrete values denoting empty cells, red pieces, and black pieces. We will generate board positions to train either by using data available online or playing our progressive deepening algorithm against itself.
We will layer a human-understandable pattern finder on top of this system.
We will compare the decisions of the system with patterns found by inspection, looking for notable inconsistencies where its decisions do not follow our intuition.
We will draw conclusions around how effectively this implementation can explain itself.
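The search core described in the steps above can be sketched as follows. This is a minimal sketch, assuming a hypothetical `game` interface (`moves`, `apply`, `is_terminal`, `score`) rather than an actual checkers implementation; the heuristic is stubbed to zero as the plan's starting point.

```python
# Progressive deepening around minimax with alpha-beta pruning.
import math

def heuristic(state):
    return 0  # starting point; later to be replaced by the trained neural net

def minimax(state, game, depth, alpha, beta, maximizing):
    if game.is_terminal(state) or depth == 0:
        return game.score(state) if game.is_terminal(state) else heuristic(state)
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            value = max(value, minimax(game.apply(state, move), game,
                                       depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cutoff: opponent will avoid this branch
                break
        return value
    else:
        value = math.inf
        for move in game.moves(state):
            value = min(value, minimax(game.apply(state, move), game,
                                       depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:      # alpha cutoff
                break
        return value

def best_move(state, game, max_depth):
    """Progressive deepening: search depth 1, then 2, ..., keeping the last answer."""
    best = None
    for depth in range(1, max_depth + 1):
        best = max(game.moves(state),
                   key=lambda m: minimax(game.apply(state, m), game,
                                         depth - 1, -math.inf, math.inf, False))
    return best
```

Progressive deepening lets the program return its best-so-far move whenever time runs out, which is why it is used as the backbone here.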
Evaluated current methods used in self-explaining neural networks
Built an implementation of progressive deepening and applied it to playing the game of checkers.
Extended this implementation using a heuristic function constructed with neural nets trained on many games of checkers.
Evaluated methods our implementation can use to explain its behavior.
Discussed the effectiveness of explanatory methods and posed possibilities for future methods to be explored.
Image Recognition - A symbolic AND connectionist approach
If we are to build more human-like and more powerful image recognition systems, we need to leverage the advantages of both symbolic methods and connectionist ones in a complementary way. This project uses a knowledge representation that would empower connectionist methods like neural nets. Classes of objects like cats will be represented by the different features that contribute to making a cat a cat. For instance, a cat would be represented by basic features, called primitive features, like 'fur,' 'hind legs,' 'pointy ears,' and 'tail.' Neural networks will be used to recognize these high-level features instead of the object itself. This representation would enable generalization: if, for example, I train my system to recognize tigers and leopards, then I will only need a few examples of ocelots for my system to start recognizing ocelots, because ocelots are made of a linear combination of the features of tigers and leopards.
Research project with implementation.
Figure out which classes would be best to focus on and use to demonstrate learning from a few examples. This is strongly tied to how classes of objects are broken down into primitive features and which features to pick.
Find appropriate datasets and a smart way to tag them. Not all pictures of cats have hind legs; some pictures of cats only include their heads. One way to deal with this problem is to find classes of objects with necessary primitive features, features without which they are unrecognizable. Also, find a way to remove other features (noise) from the picture.
Train convolutional neural networks on these different features and achieve high accuracy.
Find a knowledge representation that represents how the different features fit together. Making sure the features fit together correctly is as important as making sure they are present in the picture.
Demonstrate that the system can learn an extra class based on a few examples, where this extra class is made of primitive features that the system can already recognize. If not, figure out what went wrong in the steps.
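The few-example learning step above can be sketched as follows. This is a minimal sketch, assuming the primitive-feature detectors already exist (here faked as precomputed feature vectors) and using nearest-centroid classification as one simple way to add a class; the feature names and numbers are invented for illustration.

```python
# A new class like "ocelot" is added from a few examples by averaging their
# primitive-feature vectors and classifying queries by nearest centroid.

FEATURES = ["fur", "stripes", "spots", "pointy_ears", "tail"]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_class(classes, features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(classes, key=lambda name: dist(classes[name], features))

# Classes already learned from many examples (centroids shown directly).
classes = {
    "tiger":   [1.0, 1.0, 0.0, 1.0, 1.0],
    "leopard": [1.0, 0.0, 1.0, 1.0, 1.0],
}

# A new class from just two examples: spotted like a leopard, faint stripes.
ocelot_examples = [[1.0, 0.2, 1.0, 0.9, 1.0],
                   [1.0, 0.1, 0.9, 1.0, 1.0]]
classes["ocelot"] = centroid(ocelot_examples)
```

In the full system, the feature vectors would come from the convolutional networks trained in the earlier step rather than being written by hand.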
Merged a connectionist approach with a symbolic one, getting us closer to Minsky's idea of the Society of Mind.
Built a system that exhibits learning from a few examples, assuming primitive features are already present.
Subjective judgments of perceptual and conceptual randomness
People have a peculiar perception of randomness: on one hand, it allows us to pick out statistical regularities and notable patterns from otherwise noisy input (e.g. identifying words in a spoken sentence, even in a noisy environment). On the other, it can be entirely irrational, causing us to see structure where there is none or develop incorrect and biased intuitions.
Why are our conscious judgments of randomness often so poorly aligned with reality, even though our perceptual systems have been trained to instinctively extract information from noise? Is our perception of randomness learned through experience? If we are to understand how people perceive structure in noise, we must understand how we interpret both perceptual (e.g. in visual/auditory stimuli) and conceptual (e.g. sequences of coin tosses) randomness.
Review existing literature on subjective randomness to determine what we currently know about the relationship between perceptual and conceptual randomness, and to what extent our perception is learned through experience with regularities in natural scenes.
Conduct an experiment with human subjects and ask them to judge whether stimuli / sequences are random.
Analyze the psychophysics of when subjects begin to perceive a random stimulus as structured, subjectively explore how subjects make such judgments, and attempt a hypothesis or model for how randomness is perceived.
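The stimulus-generation step could be sketched as follows, with binary sequences standing in for coin tosses and the alternation rate as one simple candidate statistic for structure. The choice of statistic is an assumption for illustration, not a claim about the experiment's design or results.

```python
# Generate binary sequences of controlled alternation rate and measure it.
import random

def make_sequence(length, p_alternate, seed=None):
    """A 0/1 sequence where each symbol flips with probability p_alternate."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1)]
    for _ in range(length - 1):
        flip = rng.random() < p_alternate
        seq.append(seq[-1] ^ 1 if flip else seq[-1])
    return seq

def alternation_rate(seq):
    """Fraction of adjacent pairs that differ; 0.5 is expected for fair tosses."""
    changes = sum(a != b for a, b in zip(seq, seq[1:]))
    return changes / (len(seq) - 1)
```

Sweeping `p_alternate` above and below 0.5 would give a family of stimuli against which subjects' "random or structured?" judgments can be plotted.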
Developed a methodology to relate perceptual (e.g. in visual/auditory stimuli) and conceptual (e.g. sequences of coin tosses) randomness.
Offered a hypothesis/model for why people perceive randomness the way they do, and how this ability (or lack thereof) allows them to interpret their environment.
Collaborative Robots Develop Language for Communication through Games
If we are to create truly autonomous and collaborative robots that operate in dynamic environments, then we must build robots that can understand and develop language based on world experience. This would improve the robots' ability to communicate with one another and their collaborative performance.
Research project with implementation (overall ambition)
Frame the problem and argue for why collaborative robots that are able to develop their own language are important.
Review current research on collaborative robots and communication methods between robots.
Develop an open program for communication where the agents are given a basic set of words and the ability to communicate. There will also be the ability to specify goals in future test cases. These goals will be used for reinforcement learning.
Test Case 1: Hide and Seek. (simulation) Assign one robot as the seeker agent, and all others as hider agents. The seeker's goal is to find all of the hiders as quickly as possible, and it is rewarded more highly the earlier it finds them. When a hider agent is found, it turns into a seeker agent and can earn points by finding hiders. The hiders are rewarded based on how long they stay hidden, with the reward increasing over time. This game turns collaborative as soon as there are two seekers.
Test Case 2: Move something large by working together. (simulation) All agents are given the same task of moving a group of large objects to another location, but most of the objects are too large to move alone. The agents are rewarded based on how quickly they accomplish the task, with larger objects carrying more points. In order to complete the task, they must work together to move the large objects.
Program physical robots with Hide and Seek implementation to evaluate real-world interaction and communication.
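The reward scheme for Test Case 1 can be sketched as follows. This is a minimal sketch under invented assumptions: the linear reward shapes and the `MAX_STEPS` constant are placeholders, and the found-hider-becomes-seeker conversion is omitted; the plan only fixes the direction (earlier finds pay the seeker more, and hiders earn more the longer they stay hidden).

```python
# Per-round reward bookkeeping for a Hide and Seek episode.

MAX_STEPS = 100

def seeker_reward(find_step):
    return MAX_STEPS - find_step      # earlier finds are worth more

def hider_reward(steps_hidden):
    return steps_hidden               # reward grows with time hidden

def round_scores(found_at, total_steps=MAX_STEPS):
    """found_at: hider id -> step it was found, or None if never found."""
    scores = {}
    for hider, step in found_at.items():
        scores[hider] = hider_reward(step if step is not None else total_steps)
    scores["seeker"] = sum(seeker_reward(s)
                           for s in found_at.values() if s is not None)
    return scores
```

These per-round scores would serve as the reinforcement signal mentioned in the open communication program above.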
Evaluated current trends and research on robots that communicate and collaborate with one another.
Established a new trajectory for collaborative builder robots. This development would improve the robots' ability to work in dynamic and unpredictable environments.
Argued for builder robots that can collaborate in order to complete a task too big for one to complete on its own: for example, many robots building something large and complex where most of the pieces are manageable for one robot, but some pieces require two or more robots to move.
Built an open program for communication that could be applied to many test cases.
Tested the program through two test cases: Hide and Seek and Collaborative Move.
The Effect of Expressive Gesturing on Memory
If we are to understand the role of our visual system in forming memories, then we need to understand how human gesturing increases our ability to recall the content later.
I will tell a short story to subjects. In my control group, I will use no gesturing when telling the story. In my test group, I will use expressive gestures to describe the major components of my story.
After the story, the subjects will be required to answer questions about the content.
Questions will focus on gestured components of the story. These questions will vary in terms of difficulty in order to determine how much each subject actually remembers.
We will draw conclusions from this data about whether accurate gesturing plays an important role in memory recall.
Collected and analyzed data relevant to establishing whether gesturing aids recall.
Demonstrated that expressively using gestures aids a subject's story recall.
Constructive Criticism in Constructing Recipes
If we want to understand how humans construct stories, we need to understand how people interpret and respond to feedback and use it to change the story they are telling.
I will observe how people give feedback in a kitchen setting during the creation of various recipes.
I will observe how the recipe creators respond to this feedback and edit the structure of their recipe.
I will create plausible and detailed example situations in Genesese (the language that can be understood by the Genesis system) of recipes, feedback to those recipes, and how the recipe changes as a result.
Eventually, Genesis will be used to compose recipes alongside the humans trying to innovate in the kitchen.
Collected and analyzed data relevant to the processes of giving and receiving feedback for a creative and subjective task.
Demonstrated the role of feedback in the authoring process of stories, with a lens on recipes as a type of story.
Augmented the Genesis system to aid in recipe creation and recipe improvement.
Internal and External Models of Self
Humans are social beings. If we are to understand human intelligence, then we need to understand how humans view and make inferences about each other's personalities.
Subjects will be asked to self-report on traits of their personality.
In a series of timed interviews, participants will meet with other participants and, through their interactions, form notions of the characteristics of their counterpart's personality.
Participants will then answer questions about their perceptions of the other person's personality, and what they believe the other participant's perceptions were of their own personality.
Conclusions will be drawn from this data about how inferences are made about others' personalities, and in what areas, if any, people's perceptions differ significantly from self-reports.
Propose a method for how personality traits can be assessed.
Suggest a theory of how humans form representations and make inferences about each other's personalities.
Identify common points of dissonance between self and external representations of personality.
Intelligent Appearing Videogame AI Using Subsumption Architecture
Most current videogame AI agents do not appear intelligent to players. I suggest
using Brooks' subsumption architecture for videogame AI agents to create the
appearance of reasonable intelligence.
Reimplementation and Pilot study
I will design the layers of the subsumption architecture for an AI agent
in a simple videogame.
I will implement the design in an existing open source game that already
contains an AI implementation.
An experiment will be performed where I have testers play a version of the
game with the original AI, and a version of the game with the subsumption AI.
Testers will first be asked which version they found more fun, and then
which version they thought had more intelligent AI enemy agents.
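The layered design in the steps above can be sketched as follows. This is a minimal sketch, assuming invented layer names and a hypothetical `world` dictionary; the actual layers would be designed for the chosen open-source game.

```python
# A subsumption stack for a game agent: each layer either handles the
# situation (returns an action) or defers (returns None), and higher-priority
# layers subsume everything below them.

def flee(world):
    if world.get("health", 100) < 20:
        return "run_away"
    return None                       # defer to lower layers

def attack(world):
    if world.get("player_visible"):
        return "attack_player"
    return None

def wander(world):
    return "wander"                   # base layer always produces something

LAYERS = [flee, attack, wander]       # highest priority first

def act(world):
    for layer in LAYERS:
        action = layer(world)
        if action is not None:        # this layer subsumes everything below
            return action
```

Because each layer is a small, independent behavior, new layers can be added without retuning the existing ones, which is the property Brooks' architecture is valued for.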
Demonstrated how subsumption architecture should be modified to be used in
a videogame AI agent rather than a physical robot.
Implemented subsumption architecture for a videogame AI agent.
Compared subsumption videogame AI agent to a traditional AI agent in terms
of fun and appearance of intelligence.
Suggested what types of games would be best suited for subsumption-based AI.
Sailashri Parthasarathy and Katrine Tjoelsen
Human altruism: What happens when one has repeated opportunity to be altruistic?
In order to better understand what makes humans different from other animals, we must understand how humans use symbolic thinking to drive behavior. Humans reason logically with symbolic thinking in a way that other animals don't. We focus on how humans use symbolic thinking to act altruistically, i.e. selflessly to benefit others. Our approach is to design and run a pilot study to illuminate an aspect of altruistic behavior in humans.
The word "altruism" is used in many contexts, but here we limit ourselves to the Merriam-Webster definition: "[altruism is a] behavior by an animal that is not beneficial to or may be harmful to itself but that benefits others of its species". Furthermore, we only consider "altruism" in the context of cash donations given by a human subject to an unknown recipient.
We define what we mean by "altruism" and "altruistic."
We discuss related work that describes human altruistic behavior and human symbolic thinking (Chomsky and Berwick).
We limit our scope to explore the scenario where a person donates money to a stranger but where a few details (such as the income) of the stranger recipient are known. We formulate the problem as follows:
A person owns a fixed amount of money, here $100k.
The person is presented with an opportunity to donate a voluntary amount X to a recipient. The person chooses the amount X they want to donate.
After donating an amount X, the person is presented with a second chance to donate a new voluntary amount Y. And finally, after donating Y, the person has a third chance to donate a voluntary amount Z. In total, the person has three opportunities to donate money. The person does not know initially how many opportunities there will be.
We design an experiment based on this problem. The experiment runs as follows:
We design and write a survey that describes the basic scenario: "You have $100k. You may choose to donate an amount of your choice to another person who you do not know. How much would you donate?" along with two more rounds.
The survey subject plays the role of the donor.
We record how much the survey subject chooses to donate in each round.
We analyze the data and present a theory for the behavior of the donors.
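The three-round game above can be sketched as follows. This is a minimal sketch: `ask` is a stand-in for however responses are actually collected (paper survey, web form), and the 10%-per-round subject in the example is invented, not a prediction of behavior.

```python
# The repeated-donation game: a subject starts with $100k and is given three
# chances to donate a voluntary amount, without knowing in advance how many
# chances there will be.

START_WEALTH = 100_000
ROUNDS = 3

def run_game(ask):
    """ask(round_number, remaining) -> donation amount for that round."""
    remaining = START_WEALTH
    donations = []
    for r in range(1, ROUNDS + 1):
        amount = ask(r, remaining)
        amount = max(0, min(amount, remaining))  # cannot donate more than owned
        donations.append(amount)
        remaining -= amount
    return donations, remaining

# Example: a hypothetical subject who donates 10% of what they have each round.
donations, left = run_game(lambda r, remaining: remaining // 10)
```

Recording `donations` per subject across rounds gives exactly the data the analysis step needs.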
Formulated a game of repeated opportunities to donate money.
Designed a pilot study for the game of repeated opportunities to donate money.
Collected experimental results from 20 people for the pilot study.
Found that people stop donating after two rounds because they feel like they have been "sufficiently altruistic" already. Formulated a theory to describe donation behavior over repeated time steps.
Suri Bandler and Niki Tubacki
Action Based Metaphors in Story Understanding
If we are to understand how action-metaphors comparing two characters are understood in stories, then we need to understand both how readers derive character traits from actions and how they decide which character traits to apply from metaphors.
Two groups of subjects, Group 1 and Group 2, will be asked to read a made-up story, Story A. Group 2, however, will receive a slightly modified version of Story A in which one of the characters is compared to another character who appears in another made-up story, Story B. Story B will only be provided to the subjects in Group 2. Character metaphors will be action-based and therefore will be of the form: Claudius, like Macbeth, did
Subjects will be asked to answer questions regarding the characteristics of characters in Story A. We will analyze the results to identify which traits are used only by Group 2, and therefore which traits were derived directly from the metaphor.
Finally, we will develop a strategy for applying our results to the Genesis program
to allow Genesis to understand action-metaphors in stories.
Collected and analyzed data relevant to establishing how humans use metaphors to understand character traits. Demonstrated that applying character traits derived from characters' actions increases (or does not increase) a subject's story and character understanding.
Recommended a strategy for expanding the Genesis program to include the understanding of action-metaphors in stories.
Yan Hao Leon Shen
Universal Representational and Computational Substrate
In order to create systems capable of "understanding" everything and solving
any problem, we must create ways of representing and manipulating their
knowledge flexibly, integrating different types of knowledge and
different strategies for reasoning. Creating a universal substrate for both
representation and computation will allow us to achieve this flexibility by
capturing rich interconnections between different modalities of "thought".
Review past efforts to create flexible, general knowledge
representations, analysing their objectives, methods, and limitations.
Identify specific properties and characteristics that will need to be
addressed, and identify specific examples of problems requiring these.
Guided by these requirements, design a system that addresses them.
Implement it, and test it with the example problems previously identified.
Analyse the limitations of the system and suggest future improvements.
Created a universal substrate for representation and computation, flexibly supporting different kinds of knowledge and methods of reasoning.
Exposed rich interconnections between developments in many subfields of
artificial intelligence by integrating them with a common framework.
Addressed limitations of previous knowledge representations.