Operations

Graph analytics operations.

cypher ¤

cypher(bundle: Bundle, *, query: LongStr, save_as: str = 'result')

Run a Cypher query on the graph in the bundle. Save the results as a new DataFrame.

export_to_file ¤

export_to_file(bundle: Bundle, *, table_name: str, filename: str, file_format: FileFormat = csv)

Exports a DataFrame to a file.

PARAMETERS

- bundle (Bundle): The bundle containing the DataFrame to export.
- table_name (str): The name of the DataFrame in the bundle to export.
- filename (str): The name of the file to export to.
- file_format (FileFormat, default: csv): The format of the file to export to.

import_csv ¤

import_csv(*, filename: str, columns: str = '<from file>', separator: str = '<auto>')

Imports a CSV file.
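How the '<auto>' separator default is resolved is not specified here; as an illustration of the general idea, Python's standard csv.Sniffer can infer a delimiter from a sample of the file (a sketch, not LynxKite's implementation):

```python
import csv
import io

# Hypothetical sample standing in for the first lines of a CSV file;
# csv.Sniffer guesses the dialect (including the separator) from it.
sample = "id;name;age\n1;Alice;31\n2;Bob;27\n"

dialect = csv.Sniffer().sniff(sample)
rows = list(csv.reader(io.StringIO(sample), dialect))
print(dialect.delimiter)  # ;
print(rows[0])            # ['id', 'name', 'age']
```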

import_file ¤

import_file(*, file_path: str, table_name: str, file_format: FileFormat = csv, **kwargs) -> Bundle

Read the contents of a file into a Bundle.

PARAMETERS

- file_path (str): Path to the file to import.
- table_name (str): Name to use for identifying the table in the bundle.
- file_format (FileFormat, default: csv): Format of the file. Must be one of the values in the FileFormat enum.

RETURNS

- Bundle: Bundle with a single table with the contents of the file.

import_graphml ¤

import_graphml(*, filename: str)

Imports a GraphML file.

import_parquet ¤

import_parquet(*, filename: str)

Imports a Parquet file.

organize ¤

organize(bundles: list[Bundle], *, relations: str = '')

Merge multiple inputs and construct graphs from the tables.

To create a graph, import tables for edges and nodes, and combine them in this operation.
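The relations syntax is not documented here; schematically, combining a node table and an edge table into a graph can be sketched in plain Python (lists of dicts standing in for DataFrames, not the LynxKite API):

```python
# Hypothetical node and edge tables.
nodes = [
    {"id": 0, "name": "Alice"},
    {"id": 1, "name": "Bob"},
    {"id": 2, "name": "Carol"},
]
edges = [
    {"source": 0, "target": 1},
    {"source": 1, "target": 2},
]

# The graph pairs the two tables, with edge endpoints resolved against node ids.
node_by_id = {n["id"]: n for n in nodes}
adjacency = {n["id"]: [] for n in nodes}
for e in edges:
    adjacency[e["source"]].append(e["target"])

print(adjacency)  # {0: [1], 1: [2], 2: []}
```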

sample_graph ¤

sample_graph(graph: Graph, *, nodes: int = 100)

Takes a (preferably connected) subgraph of at most the given number of nodes.
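The sampling strategy is not stated here; a breadth-first walk is one common way to take a connected subgraph of a fixed node count. A minimal sketch in plain Python (not LynxKite's implementation):

```python
from collections import deque

def bfs_sample(adjacency, start, nodes=100):
    """Collect up to `nodes` vertices reachable from `start`, breadth-first,
    so the sample stays connected as long as the source graph is."""
    seen = {start}
    queue = deque([start])
    while queue and len(seen) < nodes:
        for neighbor in adjacency[queue.popleft()]:
            if neighbor not in seen and len(seen) < nodes:
                seen.add(neighbor)
                queue.append(neighbor)
    # Return the induced subgraph on the sampled vertices.
    return {v: [n for n in adjacency[v] if n in seen] for v in seen}

# A small path graph: 0 - 1 - 2 - 3 - 4.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
sampled = bfs_sample(graph, start=0, nodes=3)
```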

sql ¤

sql(bundle: Bundle, *, query: LongStr, save_as: str = 'result')

Run a SQL query on the DataFrames in the bundle. Save the results as a new DataFrame.
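LynxKite's SQL engine is not described here; as a rough stand-in, the idea of querying in-memory tables and saving the result under a new name can be sketched with Python's sqlite3 module:

```python
import sqlite3

# In-memory table standing in for a DataFrame in the bundle.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?)",
                [("Alice", 31), ("Bob", 27), ("Carol", 45)])

# Run a query and keep its rows under a new name, like save_as='result'.
query = "SELECT name FROM people WHERE age > 30 ORDER BY name"
result = con.execute(query).fetchall()
print(result)  # [('Alice',), ('Carol',)]
```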

Operations for machine learning.

define_model ¤

define_model(bundle: Bundle, *, model_workspace: str, save_as: str = 'model')

Defines a model based on the selected model workspace and saves it under the given name. Most training parameters are set in the model definition.

model_inference ¤

model_inference(bundle: Bundle, *, model_name: PyTorchModelName = 'model', input_mapping: ModelInferenceInputMapping, output_mapping: ModelOutputMapping, batch_size: int = 1)

Executes a trained model.

train_model ¤

train_model(bundle: Bundle, *, model_name: PyTorchModelName = 'model', input_mapping: ModelTrainingInputMapping, epochs: int = 1, batch_size: int = 1)

Trains the selected model on the selected dataset. Training parameters specific to the model are set in the model definition, while parameters specific to the hardware environment and dataset are set here.
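To make the epochs and batch_size parameters concrete: a generic training loop visits the dataset epochs times, batch_size rows at a time. A schematic sketch in plain Python (not LynxKite's trainer; the forward/backward pass is elided):

```python
def batches(rows, batch_size):
    """Yield consecutive slices of `rows` of length `batch_size` (last may be shorter)."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def train(rows, *, epochs=1, batch_size=1):
    steps = 0
    for _ in range(epochs):
        for batch in batches(rows, batch_size):
            # A real trainer would run a forward and backward pass here.
            steps += 1
    return steps

# 10 rows, 3 epochs, batches of 4: ceil(10 / 4) = 3 batches per epoch, 9 steps total.
print(train(list(range(10)), epochs=3, batch_size=4))  # 9
```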

train_test_split ¤

train_test_split(bundle: Bundle, *, table_name: TableName, test_ratio: float = 0.1, seed=1234)

Splits a DataFrame in the bundle into separate "_train" and "_test" DataFrames.
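A deterministic seeded split can be sketched in plain Python (not LynxKite's implementation); with test_ratio=0.1, roughly a tenth of the rows land in the "_test" table:

```python
import random

def split_rows(rows, *, test_ratio=0.1, seed=1234):
    """Shuffle a copy of `rows` with a fixed seed, then cut off the test slice.
    Returns (train_rows, test_rows)."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

train_rows, test_rows = split_rows(list(range(100)), test_ratio=0.1, seed=1234)
print(len(train_rows), len(test_rows))  # 90 10
```

Fixing the seed makes the split reproducible: rerunning with the same seed yields the same partition.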

Automatically wraps all NetworkX functions as LynxKite operations.