pgl.layers: Predefined graph neural network layers.

APIs for building graph neural network layers.

pgl.layers.gcn(gw, feature, hidden_size, activation, name, norm=None)[source]

Implementation of graph convolutional neural networks (GCN)

This is an implementation of the paper Semi-Supervised Classification with Graph Convolutional Networks (https://arxiv.org/pdf/1609.02907.pdf).

Parameters
  • gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper)

  • feature – A tensor with shape (num_nodes, feature_size).

  • hidden_size – The hidden size for gcn.

  • activation – The activation for the output.

  • name – The name of the GCN layer.

  • norm – If norm is not None, the features will be normalized by it. norm must be a tensor with shape (num_nodes,) and dtype float32.

Returns

A tensor with shape (num_nodes, hidden_size)
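
A minimal usage sketch, assuming the PGL 1.x static-graph API on top of paddle.fluid (the toy graph, feature size, and hyperparameters below are illustrative, and the exact GraphWrapper arguments vary slightly across PGL 1.x releases):

    import numpy as np
    import paddle.fluid as fluid
    import pgl

    # Toy graph: 5 nodes with random 16-dimensional features.
    graph = pgl.graph.Graph(
        num_nodes=5,
        edges=[(0, 1), (1, 2), (3, 4)],
        node_feat={"feature": np.random.randn(5, 16).astype("float32")})

    main_prog, startup_prog = fluid.Program(), fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        # The graph wrapper holds placeholders for graph structure and features.
        gw = pgl.graph_wrapper.GraphWrapper(
            name="graph", node_feat=graph.node_feat_info())
        output = pgl.layers.gcn(gw, gw.node_feat["feature"],
                                hidden_size=64, activation="relu",
                                name="gcn_layer")

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(startup_prog)
    # to_feed converts the in-memory graph into a feed dict for the placeholders.
    out = exe.run(main_prog, feed=gw.to_feed(graph), fetch_list=[output])[0]
    print(out.shape)  # (5, 64)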

pgl.layers.gat(gw, feature, hidden_size, activation, name, num_heads=8, feat_drop=0.6, attn_drop=0.6, is_test=False)[source]

Implementation of graph attention networks (GAT)

This is an implementation of the paper Graph Attention Networks (https://arxiv.org/abs/1710.10903).

Parameters
  • gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper)

  • feature – A tensor with shape (num_nodes, feature_size).

  • hidden_size – The hidden size for gat.

  • activation – The activation for the output.

  • name – The name of the GAT layer.

  • num_heads – The number of attention heads.

  • feat_drop – Dropout rate applied to the input features.

  • attn_drop – Dropout rate applied to the attention weights.

  • is_test – Whether the layer runs in the test (inference) phase.

Returns

A tensor with shape (num_nodes, hidden_size * num_heads)
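
A usage sketch continuing from the GCN example above (the gw wrapper and hyperparameter values are illustrative):

    # Eight heads of size 8; head outputs are concatenated.
    output = pgl.layers.gat(gw, gw.node_feat["feature"],
                            hidden_size=8, activation="elu",
                            name="gat_layer", num_heads=8,
                            feat_drop=0.6, attn_drop=0.6,
                            is_test=False)
    # output shape: (num_nodes, hidden_size * num_heads) = (num_nodes, 64)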

pgl.layers.gin(gw, feature, hidden_size, activation, name, init_eps=0.0, train_eps=False)[source]

Implementation of Graph Isomorphism Network (GIN) layer.

This is an implementation of the paper How Powerful are Graph Neural Networks? (https://arxiv.org/pdf/1810.00826.pdf).

In the authors' implementation, every MLP has two layers, and batch normalization is applied to each hidden layer.
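
For reference, the node update computed by this layer, as given in the paper, is \(h_v^{(k)} = \mathrm{MLP}^{(k)}\big((1 + \epsilon^{(k)})\, h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)}\big)\), where \(\mathcal{N}(v)\) denotes the neighbors of node \(v\).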

Parameters
  • gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper)

  • feature – A tensor with shape (num_nodes, feature_size).

  • hidden_size – The hidden size for gin.

  • activation – The activation for the output.

  • name – The name of the GIN layer.

  • init_eps – float, optional. The initial value of \(\epsilon\). Default is 0.

  • train_eps – bool, optional. If True, \(\epsilon\) is a learnable parameter.

Returns

A tensor with shape (num_nodes, hidden_size).
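
A usage sketch, again reusing the gw wrapper from the GCN example (values illustrative):

    # With train_eps=True, epsilon is learned during training.
    output = pgl.layers.gin(gw, gw.node_feat["feature"],
                            hidden_size=64, activation="relu",
                            name="gin_layer", init_eps=0.0,
                            train_eps=True)
    # output shape: (num_nodes, 64)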

class pgl.layers.Set2Set(input_dim, n_iters, n_layers)[source]

Bases: object

Implementation of set2set pooling operator.

This is an implementation of the paper Order Matters: Sequence to Sequence for Sets (https://arxiv.org/pdf/1511.06391.pdf).

forward(feat)[source]
Parameters

feat – Input feature with shape [batch, n_edges, dim].

Returns

Output feature of set2set pooling with shape [batch, 2*dim].

Return type

output_feat
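
A usage sketch; the placeholder shape follows the docstring above, and the input_dim, n_iters, and n_layers values are illustrative:

    import paddle.fluid as fluid
    import pgl

    # Batched sets of 64-dimensional feature vectors (variable set size).
    feat = fluid.data(name="feat", shape=[None, None, 64], dtype="float32")
    s2s = pgl.layers.Set2Set(input_dim=64, n_iters=3, n_layers=1)
    out = s2s.forward(feat)  # out shape: [batch, 2 * input_dim] = [batch, 128]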

pgl.layers.graph_pooling(gw, node_feat, pool_type)[source]

Implementation of graph pooling.

This layer aggregates node features into a single graph-level feature vector for each graph in the batch.

Parameters
  • gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper)

  • node_feat – A tensor with shape (num_nodes, feature_size).

  • pool_type – The type of pooling: “sum”, “average”, or “min”.

Returns

A tensor with shape (num_graph, feature_size)
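
A usage sketch reusing the gw wrapper from the GCN example (when gw wraps a batch of graphs, one pooled vector is produced per graph):

    # Sum the features of all nodes belonging to the same graph.
    graph_feat = pgl.layers.graph_pooling(gw, gw.node_feat["feature"],
                                          pool_type="sum")
    # graph_feat shape: (num_graph, feature_size)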

pgl.layers.graph_norm(gw, feature)[source]

Implementation of graph normalization.

This is an implementation of the graph-size normalization from the paper Benchmarking Graph Neural Networks (https://arxiv.org/abs/2003.00982).

Each node's features are divided by the square root of the number of nodes in the graph it belongs to.

Parameters
  • gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper)

  • feature – A tensor with shape (num_nodes, hidden_size)

Returns

A tensor with shape (num_nodes, hidden_size)
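
A usage sketch; graph_norm is typically applied to node features before a convolution so that nodes in large graphs do not dominate the feature scale (names reused from the GCN example):

    # Divide each node's features by sqrt(num_nodes) of its graph.
    normed = pgl.layers.graph_norm(gw, gw.node_feat["feature"])
    output = pgl.layers.gcn(gw, normed, hidden_size=64,
                            activation="relu", name="gcn_after_norm")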