  • Shape and Control Optimization

    Research and development of workflows for the co-design of reconfigurable AI software and hardware.


    Weight Agnostic Neural Networks (WANN)

    Introduction

    • Paper
      • "focus on finding minimal architectures".
      • "By deemphasizing learning of weight parameters, we encourage the agent instead to develop ever-growing networks that can encode acquired skills based on its interactions with the environment".


    1 - FRep Search

    As a first step to understand the code and WANN training, I implemented a toy problem: learning the functional representation (FRep) of a target image. Given the x, y position of every pixel in the image, the goal is to find a distance function that describes how far each pixel is from the edge of the shape.

    In the following training run the target shape is a circle and the inputs are the x and y positions of the pixels. The search found a minimal neural network architecture (built from a library of given nonlinear functions) that maps the input positions to the target shape.
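
    To make the setup concrete, the sketch below shows one way the target data for the circle case can be generated: each pixel's normalized (x, y) position is paired with its signed distance to the circle boundary. The resolution, radius, and function name are illustrative choices, not the actual training script.

    ```python
    import numpy as np

    def circle_sdf_dataset(resolution=64, radius=0.5):
        """Build (x, y) -> signed distance pairs for a circle centered at the origin.

        Negative values lie inside the shape, positive values outside, zero on the edge.
        """
        xs = np.linspace(-1.0, 1.0, resolution)
        ys = np.linspace(-1.0, 1.0, resolution)
        X, Y = np.meshgrid(xs, ys)
        dist = np.sqrt(X**2 + Y**2) - radius                 # signed distance to the circle edge
        inputs = np.stack([X.ravel(), Y.ravel()], axis=1)    # (N, 2) pixel positions
        targets = dist.ravel()                               # (N,) distances to the edge
        return inputs, targets

    inputs, targets = circle_sdf_dataset()
    ```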

    Graph Evolution:

    Target Evolution:


    2 - Walker

    • Variables
      • Modularity and Hierarchy
        • Number of Variables
          • Similar to the rover problem, I can explore the benefit of hierarchy and modularity by taking advantage of the symmetry and hierarchy in this problem (see the sketch after this list)
          • Each leg has 2 independent degrees of freedom
    • Objective function
      • This problem will be a bit harder, as the walker has to learn to stand before it can walk
      • I can try to make it easier by restraining some degrees of freedom
      • I want to explore whether the graphs of the two behaviors (standing and walking) are related
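
    One way to exploit the left/right symmetry mentioned above is to evolve a network that only outputs one leg's two joint targets and reuse them, phase-shifted, for the other leg. The controller interface and joint layout below are assumptions for illustration, not the MetaVoxels API.

    ```python
    import numpy as np

    def symmetric_controller(network, observation, phase):
        """Control a two-legged walker with a network that only outputs one
        leg's two joint targets; the second leg reuses them half a gait cycle
        later.

        `network(inputs) -> (hip, knee)` is a placeholder for the evolved
        WANN policy; the joint layout is an assumption for illustration.
        """
        # Leg A: 2 independent degrees of freedom (e.g. hip and knee targets).
        hip_a, knee_a = network(np.append(observation, np.sin(phase)))
        # Leg B: same policy, phase-shifted by half a period to mirror the gait.
        hip_b, knee_b = network(np.append(observation, np.sin(phase + np.pi)))
        return np.array([hip_a, knee_a, hip_b, knee_b])
    ```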

    Progress and Results:

    • Integration:
      • Done integrating WANN with MetaVoxels; the libraries/functions only need to be loaded once and can then be called for each simulation run in parallel
    • Training
      • 4 independent variables
      • no looping
    • Objective function:
      • maximize the x position reached by any voxel, i.e. how far the walker travels along x (sketched after this list)
    • Training Results:
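
    For reference, the "any voxel with maximum x" objective described above amounts to rewarding how far along x the body reaches by the end of the simulation. A minimal sketch, assuming a placeholder `run_simulation` call rather than the actual MetaVoxels interface:

    ```python
    import numpy as np

    def walker_fitness(topology, run_simulation):
        """Fitness = the largest x coordinate reached by any voxel.

        `run_simulation(topology)` is a placeholder that simulates the evolved
        controller and returns final voxel positions as an (N, 3) array; it is
        not the actual MetaVoxels interface.
        """
        voxel_positions = run_simulation(topology)   # (N, 3) array of x, y, z
        return float(np.max(voxel_positions[:, 0]))
    ```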