Amazon Neptune Jupyter Notebooks with persistence via EFS

Neptune Notebooks allow you to easily populate and query your Amazon Neptune graph database in an interactive way using Jupyter Notebooks. This post describes how to set up a Notebook Instance layer and a persistence layer with EFS in AWS CDK. This allows you to delete and recreate the Notebook Instance while preserving your notebooks in EFS. Moreover, you can share the file system across many Notebook Instances.

CDK project

I present here only the relevant constructs. This code is part of a CDK project written in TypeScript that can be found in this GitHub repository. If you found this article because you are implementing this architecture but are working with CloudFormation, the code should still be relevant: it is quite readable and easy to translate to CloudFormation.

The two constructs below need to be part of two different layers/stacks, as these stacks have different lifecycles.

  • The Persistence Layer contains your EFS file system, and will only be deployed once and not deleted

  • The Notebook Instance Layer can be created only for the time you are using the notebook instance, and then deleted. The beauty of infrastructure as code is that you can recreate it with a single command. You can also create several instances if you wish, as EFS can be attached to multiple instances and act as shared storage.

Notebook Instance Persistence Construct

This is the construct that creates the persistence layer. Some important points:

  • You will need to have a VPC already. The Elastic File System will be launched within that VPC.

  • Access is controlled via Security Groups: I create one for the EFS file system and one that can be used by clients connecting to the file system.

  • I store the client security group and the file system ID in public properties of the construct so that they can be injected into the notebook instance construct later on. See the GitHub repo for more details on how this is done.

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as efs from '@aws-cdk/aws-efs';
import { DeploymentConfig } from '../config/deployment-config';
import { Constants } from '../constants/constants';

export interface NeptuneNotebookPersistenceProps {
  readonly deployment: DeploymentConfig;
  readonly vpc: ec2.Vpc;
  readonly encrypted: boolean;
  readonly enableAutomaticBackups: boolean;
}

export class NeptuneNotebookPersistence extends cdk.Construct {
  public efsClientSecurityGroup: ec2.SecurityGroup;
  public efsFileSystemId: string;

  constructor(scope: cdk.Construct, id: string, props: NeptuneNotebookPersistenceProps) {
    super(scope, id);

    // Security group attached to clients that mount the file system
    const efsClientSecurityGroup = new ec2.SecurityGroup(this, 'efs-client-sg', {
      vpc: props.vpc,
      securityGroupName: `${props.deployment.Prefix}-neptune-notebook-efs-client`,
      description: `Security group for Neptune Notebook EFS clients for project ${props.deployment.Project} in ${props.deployment.Environment}`
    });

    // Security group attached to the file system, allowing NFS traffic from clients
    const efsSecurityGroup = new ec2.SecurityGroup(this, 'efs-sg', {
      vpc: props.vpc,
      securityGroupName: `${props.deployment.Prefix}-neptune-notebook-efs`,
      description: `Security group for Neptune Notebook EFS for project ${props.deployment.Project} in ${props.deployment.Environment}`
    });
    efsSecurityGroup.addIngressRule(
      efsClientSecurityGroup,
      ec2.Port.tcp(Constants.EFS_PORT),
      'EFS port');

    const fileSystem = new efs.FileSystem(this, 'file-system', {
      fileSystemName: `${props.deployment.Prefix}-neptune-notebook-efs`,
      vpc: props.vpc,
      vpcSubnets: props.vpc.selectSubnets({
        subnetType: ec2.SubnetType.PRIVATE
      }),
      securityGroup: efsSecurityGroup,
      performanceMode: efs.PerformanceMode.GENERAL_PURPOSE,
      encrypted: props.encrypted,
      enableAutomaticBackups: props.enableAutomaticBackups,
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });

    this.efsClientSecurityGroup = efsClientSecurityGroup;
    this.efsFileSystemId = fileSystem.fileSystemId;
  }
}

I have extracted some constants into the following class:

export class Constants {
  static get NEPTUNE_PORT(): number {
    return 8182;
  }

  static get EFS_PORT(): number {
    return 2049;
  }
}

The DeploymentConfig interface contains several properties, but only one is relevant here: the project name, which I use as the prefix for all resources.

export interface DeploymentConfig {
  readonly Project: string;
}

This allows me to create several stacks side by side without naming conflicts. This is explained in more detail in this post.
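As a minimal illustration of this naming scheme, here is a sketch in plain TypeScript with no CDK dependency (the helper name and example prefixes are hypothetical):

```typescript
// Hypothetical helper showing the naming scheme: every resource name is
// prefixed with the deployment prefix, so two deployments of the same
// project (e.g. dev and test) can coexist in one account without conflicts.
interface DeploymentConfig {
  readonly Prefix: string;
}

function resourceName(deployment: DeploymentConfig, name: string): string {
  return `${deployment.Prefix}-${name}`;
}

const dev = { Prefix: 'myproject-dev' };
const test = { Prefix: 'myproject-test' };

console.log(resourceName(dev, 'neptune-notebook-efs'));  // myproject-dev-neptune-notebook-efs
console.log(resourceName(test, 'neptune-notebook-efs')); // myproject-test-neptune-notebook-efs
```

Because the prefix participates in every physical resource name, the two stacks never compete for the same name.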

Notebook Instance Construct

This is the construct that creates the Neptune Notebook Instance. Some important points here:

  • The Database Cluster is created in the same project and is injected into the construct's props together with other information

  • A Neptune notebook instance is actually a SageMaker notebook instance, but there are some naming conventions that make it appear in the Neptune UI.

  • The name of the notebook instance needs to start with aws-neptune-.

  • The notebook instances need to be tagged with aws-neptune-cluster-id and aws-neptune-resource-id.

  • I am using a notebook instance lifecycle configuration script to mount the EFS volume (see below for the script if you are looking for just this script).

  • The notebook instance needs the right permissions to fetch AWS-provided notebooks from S3, connect to your DB cluster, and write logs to CloudWatch.
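Before the full construct, the name and tag conventions above can be captured in a small sketch (hypothetical helper, plain TypeScript, independent of CDK):

```typescript
// Hypothetical sketch of the Neptune notebook naming and tagging conventions.
interface CfnTag {
  key: string;
  value: string;
}

function neptuneNotebookNaming(prefix: string, clusterId: string, clusterResourceId: string) {
  return {
    // SageMaker notebook instances only show up in the Neptune console
    // when their name starts with 'aws-neptune-'
    name: `aws-neptune-${prefix}-neptune-notebook-instance`,
    // ...and when they carry these two tags linking them to the cluster
    tags: [
      { key: 'aws-neptune-cluster-id', value: clusterId },
      { key: 'aws-neptune-resource-id', value: clusterResourceId }
    ] as CfnTag[]
  };
}
```

The actual construct below applies exactly these conventions when creating the CfnNotebookInstance.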

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as iam from '@aws-cdk/aws-iam';
import * as neptune from '@aws-cdk/aws-neptune';
import * as sagemaker from '@aws-cdk/aws-sagemaker';
import { DeploymentConfig } from '../config/deployment-config';
import { Constants } from '../constants/constants';
import { ServicePrincipals } from '../constants/service-principals';
import { NeptuneNotebookConfig } from '../config/sections/neptune-notebook';

export interface NeptuneNotebookProps {
  readonly deployment: DeploymentConfig;
  readonly neptuneNotebookConfig: NeptuneNotebookConfig;
  readonly vpc: ec2.Vpc;
  readonly neptuneCluster: neptune.DatabaseCluster;
  readonly databaseClientSecurityGroup: ec2.SecurityGroup;
  readonly efsClientSecurityGroup: ec2.SecurityGroup;
  readonly efsFileSystemId: string;
}

export class NeptuneNotebook extends cdk.Construct {
  private readonly props: NeptuneNotebookProps;

  constructor(scope: cdk.Construct, id: string, props: NeptuneNotebookProps) {
    super(scope, id);

    this.props = props;

    const notebookRole = this.defineNotebookRole();

    const lifecycleConfigName = `${this.props.deployment.Prefix}-notebook-instance-lifecycle-config`;
    this.defineNotebookInstanceLifecycleConfig(lifecycleConfigName);

    this.defineNotebookInstance(notebookRole, lifecycleConfigName);
  }

  private defineNotebookRole(): iam.Role {
    const role = new iam.Role(this, 'notebook-role', {
      roleName: `${this.props.deployment.Prefix}-neptune-notebook-role`,
      assumedBy: new iam.ServicePrincipal(ServicePrincipals.SAGEMAKER)
    });

    // Fetch the AWS-provided graph notebooks from S3
    role.addToPolicy(new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: [
        's3:GetObject',
        's3:ListBucket'
      ],
      resources: [
        'arn:aws:s3:::aws-neptune-notebook',
        'arn:aws:s3:::aws-neptune-notebook/*'
      ]
    }));

    // Connect to the Neptune cluster
    role.addToPolicy(new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['neptune-db:connect'],
      resources: [`arn:aws:neptune-db:${cdk.Aws.REGION}:${cdk.Aws.ACCOUNT_ID}:${this.props.neptuneCluster.clusterResourceIdentifier}/*`]
    }));

    // Write logs to CloudWatch
    role.addToPolicy(new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: [
        'logs:CreateLogGroup',
        'logs:CreateLogStream',
        'logs:PutLogEvents'
      ],
      resources: [`arn:aws:logs:${cdk.Aws.REGION}:${cdk.Aws.ACCOUNT_ID}:log-group:/aws/sagemaker/NotebookInstances:*`]
    }));

    return role;
  }

  private defineNotebookInstanceLifecycleConfig(name: string): sagemaker.CfnNotebookInstanceLifecycleConfig {
    const persistentPath = `/home/ec2-user/SageMaker/${this.props.neptuneNotebookConfig.PersistentDirectory}`;
    const efsDns = `${this.props.efsFileSystemId}.efs.${cdk.Aws.REGION}.amazonaws.com`;
    const lifecycleConfig = new sagemaker.CfnNotebookInstanceLifecycleConfig(this, 'notebook-instance-lifecycle-config', {
      notebookInstanceLifecycleConfigName: name,
      onCreate: [{
        content: cdk.Fn.base64(`#!/bin/bash
set -e
mkdir ${persistentPath}`)
      }],
      onStart: [{
        content: cdk.Fn.base64(`#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
echo "export GRAPH_NOTEBOOK_AUTH_MODE=DEFAULT" >> ~/.bashrc
echo "export GRAPH_NOTEBOOK_HOST=${this.props.neptuneCluster.clusterEndpoint.hostname}" >> ~/.bashrc
echo "export GRAPH_NOTEBOOK_PORT=${Constants.NEPTUNE_PORT}" >> ~/.bashrc
echo "export NEPTUNE_LOAD_FROM_S3_ROLE_ARN=''" >> ~/.bashrc
echo "export AWS_REGION=${cdk.Aws.REGION}" >> ~/.bashrc
aws s3 cp s3://aws-neptune-notebook/graph_notebook.tar.gz /tmp/graph_notebook.tar.gz
rm -rf /tmp/graph_notebook
tar -zxvf /tmp/graph_notebook.tar.gz -C /tmp
/tmp/graph_notebook/install.sh
EOF
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=120,retrans=2 ${efsDns}:/ ${persistentPath}
chmod go+rw ${persistentPath}`)
      }]
    });
    return lifecycleConfig;
  }

  private defineNotebookInstance(
    role: iam.Role,
    lifecycleConfigName: string): sagemaker.CfnNotebookInstance {
    const notebookInstance = new sagemaker.CfnNotebookInstance(this, 'notebook-instance', {
      // Name has to start with 'aws-neptune-'
      notebookInstanceName: `aws-neptune-${this.props.deployment.Prefix}-neptune-notebook-instance`,
      instanceType: this.props.neptuneNotebookConfig.InstanceType,
      roleArn: role.roleArn,
      lifecycleConfigName: lifecycleConfigName,
      rootAccess: 'Enabled',
      subnetId: this.props.vpc.privateSubnets[0].subnetId,
      securityGroupIds: [
        this.props.databaseClientSecurityGroup.securityGroupId,
        this.props.efsClientSecurityGroup.securityGroupId
      ],
      tags: [
        new cdk.Tag('aws-neptune-cluster-id', this.props.neptuneCluster.clusterIdentifier),
        new cdk.Tag('aws-neptune-resource-id', this.props.neptuneCluster.clusterResourceIdentifier)
      ]
    });
    return notebookInstance;
  }
}
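To make the lifecycle script easier to follow, here is what it interpolates at synth time, as standalone TypeScript (hypothetical helper names; the DNS format is the standard regional EFS endpoint):

```typescript
// Hypothetical helpers mirroring the strings the lifecycle script builds.
function efsDnsName(fileSystemId: string, region: string): string {
  // EFS exposes a regional DNS name: <file-system-id>.efs.<region>.amazonaws.com
  return `${fileSystemId}.efs.${region}.amazonaws.com`;
}

function efsMountCommand(fileSystemId: string, region: string, mountPath: string): string {
  // NFSv4.1 options recommended for mounting EFS
  const options = 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=120,retrans=2';
  return `mount -t nfs -o ${options} ${efsDnsName(fileSystemId, region)}:/ ${mountPath}`;
}

console.log(efsMountCommand('fs-0123456789abcdef0', 'eu-west-1',
  '/home/ec2-user/SageMaker/notebooks'));
```

Note that the mount itself needs root, which is why it sits outside the sudo heredoc: lifecycle configuration scripts run as root, while the heredoc portion configuring graph-notebook runs as ec2-user.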

The relevant Service Principal is the following:

export class ServicePrincipals {
  static get SAGEMAKER(): string {
    return 'sagemaker.amazonaws.com';
  }
}


Practising infrastructure as code with AWS CDK, we have created an EFS file system and mounted it on a SageMaker Notebook Instance configured so that it acts as a Neptune Notebook Instance. The EFS file system provides persistence, so stored notebooks are preserved when the notebook instance is deleted and recreated. It also allows you to share a single file system across several notebook instances, giving them all access to the notebooks stored in it.