====== Dev Blog ======

===== October 28, 2022 =====

  * Papers titled "A Model for Self-Organization of Sensorimotor Function: The Spinal Monosynaptic Loop" and "
  * [[https://

===== October 11, 2021 =====

  * They are not spikes, they are sparks,
  * in terms of self-assembling structures
  * and spiking neural networks;
  * they'
  * the electricity builds up and then discharges.

===== July 28, 2021 =====

  * along with self-assembling structures
  * Battery dendrite formation
  * Simulating dendrite formation

===== April 10, 2021 =====

  * self-assembling structures
  * YouTube video: self-assembling wires
  * Dr. Hubler
  * This behavior solves the node connection problem.

===== March 20, 2021 =====

  * I first tried to figure out gravity.
  * I didn't have much understanding of electronics or physics at the time, and didn't have a path forward in solving such a problem.
  * Then I wasted some time trying to figure out infinite energy.
  * I then got a job doing natural language processing, which is machine learning, or artificial intelligence.
  * Once I started grasping this world, it all coalesced.
  * I've been studying human behavior, i.e. psychology, because I don't understand people.
  * But I've also been studying computer programming.
  * When you merge these two together, you get artificial intelligence.
  * And I started thinking about how the human brain works and how someone could recreate the human brain in a computer.
  * I had an idea on how to solve this.
  * I spent the next 15 years trying to determine an algorithm that would recreate this behavior.
  * In 2015, I realized exactly what algorithm would behave like the human brain.
  * Since 2015, I have been trying to figure out how to fund the effort of taking this algorithm and turning it into a full product.
  * At the end of 2020, I succeeded in raising the funds needed to pursue this algorithm full time.
  * Now I have been thinking about how this algorithm could help people.
  * It can be used to automate production.
  * It could be used to subvert the governments of the world and make societies better for the common person.
  * It could be used to generate even more money, but if it automates the means of production, money would be meaningless.

===== February 22, 2021 =====

[[https://

  * A normal person (25% of people) will have both an inner dialog with their logical brain and an inner dialog with their emotional brain.
  * A person with Asperger'
  * Apparently there are people with only the emotional dialog, probably highly empathetic people.

  * Do you have an emotional inner dialog?
  * Do you have a logical inner dialog?
  * Do you have both, and do they argue with each other?
  * Do you have neither, and only feel emotions?

===== October 10, 2020 =====

Dynamic population encoding algorithm

We believe that humans map inputs to actions. To do this optimally, some action in the distant past has to have an effect on present actions. Nature handled this by providing some means to divert the input signal and have the diverted signal influence the action at a later stage. This was also sufficient to make the mapping a function, because similar states could now cause widely different actions: the action was conditioned not only on the current state, but also on the diverted input signal.

Within the brain, which was made with the sole purpose of diverting the signal, subnetworks with strong local connectivity within themselves would cause different effects in other networks (even those with other networks in between) at different times, by diverting the signal they received through the rest of the networks before it reached the network they were trying to affect. This holds pairwise between every pair of networks in the brain.

The neural model we propose consists of a networked cellular automaton. Unlike in a traditional cellular automaton, where physical boundaries determine neighbours, neighbouring cells can be "further" apart. This means it will be necessary to keep an adjacency matrix that records which neurons are connected to which.

The network will be initialized randomly with 2% connectivity throughout, in order to simulate sparse codes. Then spectral graph analysis will be used to determine which segments the network can be broken into most logically. An agent A will be assigned to each subnetwork, and its set of actions
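The initialization and partitioning step above could be sketched as follows. This is a minimal sketch assuming NumPy: the 2% connection density comes from the text, but the choice of an unnormalized graph Laplacian and a single two-way split on the sign of the Fiedler vector is an illustrative assumption, not a detail from this blog.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 200  # number of neurons (illustrative size, not from the text)

# Random adjacency matrix with ~2% connectivity, symmetrized so the
# graph is undirected for the spectral analysis below.
adj = (rng.random((N, N)) < 0.02).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0.0)

# Unnormalized graph Laplacian: L = D - A.
degree = np.diag(adj.sum(axis=1))
laplacian = degree - adj

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# gives a "most logical" two-way split of the graph; an agent A would
# then be assigned to each resulting subnetwork.
eigvals, eigvecs = np.linalg.eigh(laplacian)
fiedler = eigvecs[:, 1]
subnet_a = np.where(fiedler >= 0)[0]
subnet_b = np.where(fiedler < 0)[0]
```

Recursing the same split on each half would yield more than two subnetworks, if the model needs them.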
The rapid spanning tree algorithm (RSTA) is used in Cisco routers to prevent loops within a network. It does so by determining which connections should be blocked and which maintained. We will use RSTA to adjust the topology of each of the subnetworks. Within RSTA, switches exchange information in order to vote for a root bridge/

Instead, the node that the agent A associated with the subnetwork is currently positioned on will be chosen as the root node. Note that agent A is free to hop from node to node, even if the nodes are not directly connected. Once the root node is chosen, the topology of the subnet is adjusted in line with this information. Our aim is to get each agent A to maximise its own reward by indirectly changing the topology of the network and influencing yet another network, which will act as the environment of the agent.

Each agent A will be associated with two networks: the network that it is hopping from node to node in, and one of the other networks. This secondary network will act as part of the environment for the agent and will give it its reward. In a robotic system, once the total reward is calculated, we would like a way to use it to influence which rewards a particular network feeds its paired agent A. This will be done by having each network keep a Q table, with the action being increasing or decreasing the reward that the network gives out, and the state being the global reward.

In short, each network has two associated agents, A and B: one agent A that hops from node to node changing the topology of the network, and one agent B that calculates the reward it should emit to the diametrically opposed agent from the paired network. The dynamics would have each network's agent A figure out a way to maximise its total personal reward by changing the topology of its network, causing a ripple effect that modulates the network diametrically opposite it.
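The Q table kept by agent B could look something like this minimal sketch. Only the action set (increase or decrease the emitted reward) and the state (the global reward) come from the text; the state discretization, reward range, and learning parameters are illustrative assumptions.

```python
import random

# From the text: agent B either raises or lowers the reward its network emits.
ACTIONS = ("increase", "decrease")

def discretize(global_reward, n_bins=10, low=-1.0, high=1.0):
    """Map the continuous global reward onto a small number of state bins.

    The bin count and reward range are assumptions for illustration.
    """
    clipped = max(low, min(high, global_reward))
    return min(n_bins - 1, int((clipped - low) / (high - low) * n_bins))

class AgentB:
    """Per-network agent that tunes the reward fed to its paired agent A."""

    def __init__(self, n_bins=10, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in range(n_bins) for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_bins = n_bins

    def act(self, global_reward):
        """Epsilon-greedy choice between increasing and decreasing the emitted reward."""
        state = discretize(global_reward, self.n_bins)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, prev_reward, action, reward, new_reward):
        """Standard one-step Q-learning update on the (state, action) cell."""
        s = discretize(prev_reward, self.n_bins)
        s2 = discretize(new_reward, self.n_bins)
        best_next = max(self.q[(s2, a)] for a in ACTIONS)
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)]
        )
```

One pairing per network would instantiate one `AgentB` whose chosen action scales the reward passed to the diametrically opposed agent A.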
This reward is calculated by the second network'

One of the other networks in the model will act as a teacher and have mirrored neurons for each of the output neurons mentioned. Once the neural network has received some input and predicted the particular cells that it does, the difference between the signal received by the mirrored neurons and the neural network's output cells will be used to compute a loss value with which to train the neural networks.
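The teacher step could be sketched like this. Using the mean squared difference is an assumption for illustration; the text only says that the difference between the mirrored signal and the output cells drives the loss.

```python
def teacher_loss(mirrored_signal, output_cells):
    """Loss between the teacher's mirrored neurons and the network's output cells.

    Mean squared difference is an illustrative choice; the source only
    specifies that the difference is turned into a training loss.
    """
    if len(mirrored_signal) != len(output_cells):
        raise ValueError("the teacher must mirror every output neuron")
    return sum(
        (m - o) ** 2 for m, o in zip(mirrored_signal, output_cells)
    ) / len(output_cells)
```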

===== July 10, 2020 =====

  * It creates multiple connections, then drops around 30% of those connections. Ref: Workshop on Continual Learning | CVPR 2020 | Subutai Ahmad
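A minimal sketch of that grow-then-prune step. The roughly 30% drop figure is from the referenced talk; everything else here, including purely random pruning, is an illustrative assumption.

```python
import random

def grow_then_prune(n_nodes, n_connections, drop_fraction=0.3, seed=0):
    """Create many candidate connections, then drop a fraction of them at random."""
    rng = random.Random(seed)
    # Grow: sample candidate (pre, post) connections between distinct nodes.
    connections = set()
    while len(connections) < n_connections:
        pre, post = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if pre != post:
            connections.add((pre, post))
    # Prune: keep ~70% of the connections, dropping the rest at random.
    n_keep = round(len(connections) * (1 - drop_fraction))
    return set(rng.sample(sorted(connections), n_keep))

kept = grow_then_prune(n_nodes=50, n_connections=100)
```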

===== January 12, 2020 =====

  * Input - [[https://
  * Node creation - paper about neuroplasticity
  * Output discovery - my own work in watching the neuron algorithm
  * Feedback loop - [[https://

===== December 22, 2019 =====

  * [[https://

===== December 7, 2019 =====

  * another potential market - [[https://
===== November 30, 2019 =====