
Commit 8f2f396: Fix typos

KostyaCholak committed Oct 28, 2023
1 parent d5f882c
Showing 3 changed files with 5 additions and 5 deletions.
docs/pattern_matching.md (2 changes: 1 addition & 1 deletion)
@@ -8,7 +8,7 @@ nav_order: "60"
# Pattern Matching

## Why It Is Needed
- People think about the world in terms of abstract concepts because it substatually simplifies interractions with the real world. What this means is that me map real world onto our world model by simplifying and aggregating incoming signals from the sensory organs.
+ People think about the world in terms of abstract concepts because it substantially simplifies interactions with the real world. What this means is that we map the real world onto our world model by simplifying and aggregating incoming signals from the sensory organs.
This process of simplification is called pattern matching because the agent is looking for abstract concepts in the raw signals, trying to find known patterns in the data. If we didn't have pattern matching, then we would need to work with the raw data instead of its simplified representation, which is a lot more resource-intensive.
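To make the idea concrete, here is a minimal sketch of pattern matching over raw text, assuming a hypothetical `match_concepts` helper and hand-written patterns that are illustrative rather than part of this framework:

```python
# Illustrative sketch only: map raw input onto known abstract concepts.
# The patterns and concept names are hypothetical, not the framework's API.
import re

# Each known concept is paired with a crude surface pattern.
KNOWN_PATTERNS = {
    "greeting": re.compile(r"\b(hi|hello|hey)\b", re.IGNORECASE),
    "question": re.compile(r"\?\s*$"),
    "danger":   re.compile(r"\b(fire|poison|attack)\b", re.IGNORECASE),
}

def match_concepts(raw_signal: str) -> set[str]:
    """Return the abstract concepts recognized in a raw signal."""
    return {name for name, pattern in KNOWN_PATTERNS.items()
            if pattern.search(raw_signal)}

# The agent can now reason over {"greeting", "question"} instead of raw characters.
print(match_concepts("Hello, is the stove still on?"))
```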

## Ambiguity
docs/world_model/index.md (2 changes: 1 addition & 1 deletion)
@@ -9,6 +9,6 @@ has_children: true
To effectively process incoming events, generate novel knowledge, and reason about the environment, an agent requires a robust internal representation of the world. This section describes the data structures used for that internal representation and the motivation behind them.

## Abstract Representation
- World model represent well-bounded representations of a physical world, which is oftern hard to bound. For example a real-world collection of particles people would call "a puddle" does not have clear boundaries - water is constantly evaporating and condensing, it goes through the ground and mixes with it. But in people's head it's a single entity that they can reason about. The same approach was used in this framework - entities resemble a class instance from Object-Oriented Programming (OOP).
+ The world model represents well-bounded abstractions of a physical world that is often hard to bound. For example, a real-world collection of particles people would call "a puddle" does not have clear boundaries - water is constantly evaporating and condensing, seeping through the ground and mixing with it. But in people's heads it is a single entity that they can reason about. The same approach is used in this framework - entities resemble class instances from Object-Oriented Programming (OOP).
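As a rough illustration of the OOP analogy, here is a sketch of a puddle modeled as a single entity; the class and attribute names are hypothetical, not the framework's actual data structures:

```python
# Illustrative sketch: a fuzzy physical phenomenon modeled as one well-bounded entity.
# The Entity class and its fields are hypothetical, not the framework's real schema.
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A well-bounded stand-in for a physical phenomenon with fuzzy boundaries."""
    name: str
    properties: dict = field(default_factory=dict)

# The puddle has no crisp physical boundary, yet the world model treats it as a
# single object that other knowledge can attach to and reasoning can refer to.
puddle = Entity(name="puddle", properties={"state": "liquid", "location": "driveway"})
puddle.properties["volume_liters"] = 1.5  # attributes can change as the world does
```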


index.md (6 changes: 3 additions & 3 deletions)
@@ -9,11 +9,11 @@ nav_order: "10"

The AI landscape has been notably transformed by the emergence of Large Language Models (LLMs). Their rise has undeniably brought us closer to Artificial General Intelligence (AGI). Yet some challenges intrinsic to LLMs make us skeptical that they are the best way to reach AGI.

- ### 1.1 Controlling Probablity
+ ### 1.1 Controlling Probability

One of the nuances of LLMs lies in their control mechanics. While minor calibrations can influence their outcomes, achieving a comprehensive grasp of their intricate functionality is challenging. For instance, integrating a component as fundamental as long-term memory requires more than a superficial adjustment. Existing methodologies like Retrieval-Augmented Generation (RAG) offer partial solutions, but they are inherently limited and don't genuinely emulate organic memory dynamics. Altering the model's architecture is an option, but it mandates a comprehensive retraining process. And before we even delve into the complexities of alignment, it's evident that mastering the technical aspects alone presents substantial hurdles.

- ### 1.2 Trainig Cost
+ ### 1.2 Training Cost

Starting the training process of an LLM from the ground up is a resource-intensive endeavor. This doesn't just impact the exploration of newer architectural innovations but also presents a barrier to entry for independent and smaller-scale researchers. Hence, while LLMs present a tantalizing direction toward AGI, they also raise questions about access, equity, and diversity in research.

@@ -29,7 +29,7 @@ One of the historic challenges algorithmic models grappled with was ambiguity. P

### 2.2 Conceptual Processing and Learning

- Our framework is based on the new pattern-matching techniques that can handle ambiguity. We process raw data (like text, audio, images) into a conceptual representation. It can then trigger different actions that are associated with the concept that an agent was able to find in the raw data. For example a question might trigger an action of thinking about an answer and then telling it. But the agent's learning journey doesn't end there. By employing pattern matching on its subsequent actions and the feedback it garners from the environment, the agent is able to understand its influence on its surroundings, continiously building it's world model. This is made possible as both actions and outcomes are represented as graphs, allowing for a deeper, graph-based pattern recognition. The use of pattern-matching on the agent's actions introduces an additional layer of non-liniarity. Now, the agent's behavior isn't solely influenced by environmental input but also by its own potential responses to such input. Consider a scenario where the agent contemplates a potentially dangerous action; the pattern-matching mechanism identifies the associated concept of danger (from my actions), that concept will trigger an action to question the appropriateness of its intended action.
+ Our framework is based on new pattern-matching techniques that can handle ambiguity. We process raw data (like text, audio, or images) into a conceptual representation, which can then trigger actions associated with the concepts the agent found in the raw data. For example, a question might trigger an action of thinking of an answer and then saying it. But the agent's learning journey doesn't end there. By employing pattern matching on its subsequent actions and the feedback it garners from the environment, the agent is able to understand its influence on its surroundings, continuously building its world model. This is made possible because both actions and outcomes are represented as graphs, allowing for deeper, graph-based pattern recognition. The use of pattern matching on the agent's actions introduces an additional layer of non-linearity: the agent's behavior isn't solely influenced by environmental input but also by its own potential responses to that input. Consider a scenario where the agent contemplates a potentially dangerous action; the pattern-matching mechanism identifies the concept of danger in the agent's own plan, and that concept triggers an action to question the appropriateness of the intended action.
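A highly simplified sketch of that loop follows; every name and pattern in it is a hypothetical placeholder standing in for the framework's actual graph-based machinery:

```python
# Illustrative sketch of the perceive -> act -> self-monitor loop described above.
# All names and patterns are hypothetical placeholders, not the framework's API.

def detect_concepts(data: str) -> set[str]:
    """Stand-in for graph-based pattern matching over raw input or actions."""
    concepts = set()
    if data.rstrip().endswith("?"):
        concepts.add("question")
    if "fire" in data or "knife" in data:
        concepts.add("danger")
    return concepts

def propose_action(raw_input: str) -> str:
    """Stand-in for the agent's first-pass reaction to recognized concepts."""
    if "question" in detect_concepts(raw_input):
        return "think of an answer, then say it"
    if "thirsty" in raw_input:
        return "boil water on the fire"
    return "do nothing"

def step(raw_input: str) -> list[str]:
    action = propose_action(raw_input)
    plan = [action]
    # The same pattern matcher runs over the agent's own intended action, so a
    # risky plan re-triggers the "danger" concept before it is ever executed.
    if "danger" in detect_concepts(action):
        plan.insert(0, "question whether this action is appropriate")
    return plan

print(step("I'm thirsty"))  # the risky plan gets a self-check prepended
```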

While this framework is ambitious and teeming with complexities, it underscores the potential pathways we could explore as we inch closer to AGI.

