
Knowledge Graph as an input to a neural network

Data Science Asked by anascmidt on August 24, 2021

I want to create a neural network that takes as input a knowledge subgraph (different types of nodes and different types of edges) to predict some properties. For instance, an entry in the graph can be:

:Mike :likes :chocolate;
      :studies :Biology.

Assume we have $n$ nodes and $r$ relation types. How can I provide this as input to a neural network, and what would the architecture of the neural network look like?

One Answer

You can represent this type of knowledge graph as a binary $n \times r \times n$ tensor. (You can think of this as a 3D matrix if it helps.)

The first dimension is for the node on the left side of the relationship, the second dimension is for the relation type, and the third dimension is for the node on the right side of the relationship. Then you can represent any relationship between two nodes by inserting ones in the correct indices.

To demonstrate, suppose we have 4 nodes: Mike at index 0, Sully at index 1, chocolate at index 2, and Biology at index 3.

And suppose we have 2 relations: likes (index 0), studies (index 1).

We would first create a $4 \times 2 \times 4$ tensor filled with zeros:

import torch

graph = torch.zeros([4, 2, 4], dtype=torch.bool)
print(graph)

""" Output:
tensor([[[False, False, False, False],
         [False, False, False, False]],

        [[False, False, False, False],
         [False, False, False, False]],

        [[False, False, False, False],
         [False, False, False, False]],

        [[False, False, False, False],
         [False, False, False, False]]])
"""

To represent the relationship :Mike :likes :chocolate, we insert a one at position [0, 0, 2] (0 for Mike, 0 for likes, 2 for chocolate).

To represent :Mike :studies :Biology, we insert a one at [0, 1, 3] (0 for Mike, 1 for studies, 3 for Biology).

# :Mike :likes :chocolate
graph[0, 0, 2] = 1

# :Mike :studies :Biology
graph[0, 1, 3] = 1
print(graph)

""" graph:
tensor([[[False, False,  True, False],
         [False, False, False,  True]],

        [[False, False, False, False],
         [False, False, False, False]],

        [[False, False, False, False],
         [False, False, False, False]],

        [[False, False, False, False],
         [False, False, False, False]]])
"""

Of course we can represent any relationship between 2 nodes. Let's add a few more:

# Mike and Sully are friends

# :Sully :likes :Mike
graph[1, 0, 0] = 1

# :Mike :likes :Sully
graph[0, 0, 1] = 1

# I suppose chocolatology is a subject in Biology
# :Biology :studies :chocolate
graph[3, 1, 2] = 1
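
As a quick check, here is one (hypothetical) way to read relations back out of the tensor; the nodes and relations lists below simply mirror the index assignments from earlier:

# Hypothetical lookup lists mirroring the index assignments above
nodes = ["Mike", "Sully", "chocolate", "Biology"]
relations = ["likes", "studies"]

# Everything Mike (node index 0) likes (relation index 0)
likes_mask = graph[0, 0]   # tensor([False,  True,  True, False])
print([nodes[i] for i in likes_mask.nonzero().flatten().tolist()])
# ['Sully', 'chocolate']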

Concerning architecture, there are at least two viable ways to feed this representation to a neural network. You could flatten the whole graph and treat it like a 1-dimensional input. Another option is to retain the 3 dimensions and use convolutional layers (or something else) to extract features (3D convolution works just like 2D convolution).
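
For instance, a minimal sketch of both options in PyTorch might look like the following (the layer sizes and the single output unit are placeholder assumptions, not part of the original answer):

import torch.nn as nn

# Option 1: flatten the whole 4 x 2 x 4 graph tensor and feed it to an MLP.
mlp = nn.Sequential(
    nn.Flatten(),                 # (batch, 4, 2, 4) -> (batch, 32)
    nn.Linear(4 * 2 * 4, 16),     # assumed hidden size
    nn.ReLU(),
    nn.Linear(16, 1),             # e.g. one predicted property
)

# Option 2: keep the 3 dimensions and extract features with 3D convolutions.
# Conv3d expects (batch, channels, depth, height, width), so we add a channel dim.
cnn = nn.Sequential(
    nn.Conv3d(in_channels=1, out_channels=8, kernel_size=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(1),             # infers the flattened size on first call
)

x = graph.float().unsqueeze(0)    # bool -> float, add batch dim: (1, 4, 2, 4)
print(mlp(x).shape)               # torch.Size([1, 1])
print(cnn(x.unsqueeze(1)).shape)  # torch.Size([1, 1])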

As for the rest of the architecture, that's up to you!

Answered by zachdj on August 24, 2021
