• neural nets adapted to leverage the structure and properties of graphs → explore the components needed for building a graph neural network + the motivation behind its design choices

    • i.e. each node's activation follows a path across the network, which outlines a probabilistic representation of the data through the network, given the output task
  • graphs = connections/conditions between different things → such data is naturally expressed as graphs

    • GNNs are NNs that operate over graphs [convolutions over graphs] → recent developments have increased size and expressive power of GNNs
      • EG: antibacterial discovery, physics simulations, fake news detection, traffic prediction, recommender systems, and much more
  • def: graph → represents relations (edges) between collection of bodies/objects (nodes)

    • Graphs: V = nodes, E = edges, U = global attributes [# of V&E, possible traversals]

    • Neural Networks: V = node embedding (activation), E = weight links, U = network arch.

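The V/E/U breakdown above can be sketched in code. This is a minimal illustrative representation (the attribute values are made up, not from the source):

```python
# A tiny graph stored as the three attribute sets described above:
# node attributes V, edge attributes E, and global attributes U.
graph = {
    "V": {0: [0.1], 1: [0.7], 2: [0.3]},    # per-node feature vectors
    "E": {(0, 1): [1.0], (1, 2): [0.5]},    # per-edge feature vectors
    "U": {"num_nodes": 3, "num_edges": 2},  # graph-level attributes
}

# The global attributes agree with the node and edge sets.
assert graph["U"]["num_nodes"] == len(graph["V"])
assert graph["U"]["num_edges"] == len(graph["E"])
```

In the neural-network analogy, each entry of `V` would be a node embedding (activation) and each entry of `E` a weighted link.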

  • graphs can be specialized using directions (undirected means an edge goes both ways; directed means it is one-way [a basic neural network is a directed graph])

  • symmetries and correlations between data can be represented as graphs [i.e. words in a sentence, pixels in an image]

  • images as graphs → instead of pixels with channels, make every pixel a node whose value represents its brightness, and then determine the edges and shape of the graph using an adjacency matrix

    • same can be done with text, where words can be digitized with token representations → creates a simple directed graph where each word in a sentence is a node, connected via an edge to the node that follows it [NLP rep]
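Both conversions above can be sketched briefly. The image values and the sentence are illustrative assumptions; the point is the adjacency structure:

```python
import numpy as np

# A 2x2 grayscale image as a graph: each pixel becomes a node whose feature
# is its brightness; edges connect neighboring pixels (4-connectivity),
# recorded in an adjacency matrix.
image = np.array([[0.1, 0.9],
                  [0.4, 0.6]])
h, w = image.shape
node_features = image.reshape(-1)  # node i = pixel (i // w, i % w)

adjacency = np.zeros((h * w, h * w), dtype=int)
for r in range(h):
    for c in range(w):
        i = r * w + c
        if c + 1 < w:  # right neighbor
            adjacency[i, i + 1] = adjacency[i + 1, i] = 1
        if r + 1 < h:  # down neighbor
            adjacency[i, i + w] = adjacency[i + w, i] = 1

print(adjacency.sum() // 2)  # 4 undirected edges in a 2x2 grid

# Text works the same way: each word is a node, with a directed edge
# from each word to the one that follows it.
words = "graphs are all around us".split()
word_edges = [(i, i + 1) for i in range(len(words) - 1)]
print(word_edges)  # [(0, 1), (1, 2), (2, 3), (3, 4)]
```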
  • *converting a standard image to a standard graph this way is not useful on its own [redundant] → the point is that graphs embed structure

  • molecules as graphs → atoms in space, types of bonds, pairing, electronegativity, partial charge, strength of bond → all can be embedded in a given graph
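A molecule-as-graph sketch, using water (H2O) as an example. The feature choices (element, partial charge, bond type) and the numeric charge values are illustrative assumptions, not from the source:

```python
# A water molecule as a graph: atoms are nodes carrying chemical features,
# bonds are edges carrying their own attributes.
molecule = {
    "nodes": [
        {"element": "O", "partial_charge": -0.66},  # illustrative value
        {"element": "H", "partial_charge": +0.33},
        {"element": "H", "partial_charge": +0.33},
    ],
    "edges": [
        {"pair": (0, 1), "bond": "single"},  # O-H bond
        {"pair": (0, 2), "bond": "single"},  # O-H bond
    ],
}

# 3 atoms, 2 bonds
assert len(molecule["nodes"]) == 3
assert len(molecule["edges"]) == 2
```

Any other per-atom or per-bond property (electronegativity, bond strength) would just be another key in the node or edge dicts.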