Titan: Simple serverless deployments for Data Science




Titan is a serverless deployment and service orchestration engine for Data Science and Machine Learning projects.

Here you will find comprehensive information to start using Titan to easily deploy and scale AI/ML (Artificial Intelligence / Machine Learning) models.

Why Titan?

At Akoios we build technology to help data scientists in their day-to-day tasks.

Using Titan, our flagship product, data scientists can forget about the complexity of AI/ML DevOps workflows and focus on what is really important: data and models.

Titan automagically transforms models into services.


How does Titan work?

Titan seamlessly transforms all types of models into scalable, ready-to-use REST APIs with a single command.


Once the model has been transformed into a service, it can be easily integrated with any software of your choice: ERPs, CRMs, Websites…

This allows Data Scientists to easily iterate and integrate their models into the business processes of their companies.

Titan tools

Titan comes with a handy set of tools to facilitate data science workflows.

In addition, Titan includes several features to boost productivity in AI/ML projects:

  • Model versioning
  • Team management
  • Usage tracking
  • Monitoring of arbitrary model metrics



Windows

Open a PowerShell terminal (Win+R and type powershell) and run:

iwr -useb install.akoios.com/beta/w | iex

The installer requires PowerShell 5+ and the .NET Framework. If your system does not satisfy these requirements, try the manual installation.

Alternative manual installation

  1. Download titan.exe.

  2. Open a PowerShell terminal: Win+R and type powershell.

  3. Run the following command:

    move "${Env:USERPROFILE}\Downloads\titan.exe" "${Env:LOCALAPPDATA}\Microsoft\WindowsApps"

  4. Run titan help to get started.

macOS / Linux

The quickest way to install titan is by running the following command:

curl -sf https://install.akoios.com/beta | sh

Alternatively, in case you do not have curl installed:

wget -qO - https://install.akoios.com/beta | sh

By default, Titan is installed to /usr/local/bin. To install to a different directory, set BINDIR; this can be useful in CI environments where you may not have access to /usr/local/bin. Here’s an example installing to the current directory:

curl -sf https://install.akoios.com/beta | BINDIR=. sh

You can also install Titan by downloading the binary for your platform from install.akoios.com

After downloading it, run the following commands:

chmod +x titan
sudo mv titan /usr/local/bin

Verify the installation

You can verify the installation with:

titan help


Later, when you want to update Titan to the latest version, simply run:

titan upgrade


If you hit permission issues, you may need to run the following command, since titan is installed to /usr/local/bin/titan by default:

sudo chown -R $(whoami) /usr/local/bin/

Preparing the models

Hardware and runtime provisioning

Titan makes hardware and runtime provisioning as easy as it can get. To provision resources, you just need to specify the following information in YAML format in a Markdown cell:

titan: v1
image: scipy
cpu: 2
memory: 1024MB
command:
  - "!pip install -r requirements.txt"

These are the available options for each of the parameters:

  • image: Preferred runtime for the model (scipy, tensorflow, datascience…)
  • cpu: Preferred number of cores
  • memory: Desired memory in MB (NNNNMB). E.g. 1024MB, 2048MB…
  • command: Arbitrary commands

If no YAML configuration is provided, a default hardware configuration will be provisioned (please note that this can be customized per customer):

  • Default hardware provisioning: 2 Cores - 2GB RAM

In the same manner, limits can be established for the hardware. E.g.:

  • Max hardware provisioning: 4 Cores - 8GB RAM

Instrumentalization basics

Instrumentalization is the name we give to the changes needed in the code of your models so that Titan can deploy and scale them. These changes consist in specifying which cells of your Jupyter Notebook you want to transform into API endpoints.

In order to instrumentalize a model, you must prefix the cell with a single line comment. The comment describes the HTTP method and resource, as in the following Python example:

# GET /hello/world
print("hello world")

The annotation above declares the cell contents as the code to execute when the kernel gateway receives an HTTP GET request for the path /hello/world.

It is possible to define as many arbitrary endpoints as desired in a single model.

Available HTTP methods are:

  • GET
  • POST
  • PUT

For other languages, the comment prefix may change, but the rest of the annotation remains the same.

Multiple cells may share the same annotation. Their content is concatenated to form a single code segment at runtime.

Getting the request data

Before the gateway invokes an annotated cell, it sets the value of a global notebook variable named REQUEST to a JSON string containing information about the request. You may parse this string to access the request properties.

For example, in Python:

# GET /hello/world
import json
req = json.loads(REQUEST)
# do something with req

You may specify path parameters when registering an endpoint by prepending a : to a path segment. For example, a path with parameters firstName and lastName would be defined as the following in a Python comment:

# GET /hello/:firstName/:lastName
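As a sketch of how such a cell could use those parameters, the snippet below simulates the REQUEST string the gateway would inject for a hypothetical GET /hello/Jane/Doe call (the names and values are illustrative only):

```python
import json

# Simulated REQUEST for GET /hello/Jane/Doe -- in a deployed
# notebook this variable is set by the gateway, not by you
REQUEST = json.dumps({
    "path": {"firstName": "Jane", "lastName": "Doe"},
    "args": {},
    "headers": {},
    "body": "",
})

# GET /hello/:firstName/:lastName
req = json.loads(REQUEST)
print("hello {} {}".format(req["path"]["firstName"], req["path"]["lastName"]))
```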

The REQUEST object currently contains the following properties:

  • body - The value of the body, see the Body And Content Type section below

  • args - An object with keys representing query parameter names and their associated values. A query parameter name may be specified multiple times in a valid URL, and so each value is a sequence (e.g., list, array) of strings from the original URL.

  • path - An object of key-value pairs representing path parameters and their values.

  • headers - An object of key-value pairs where a key is an HTTP header name and a value is the HTTP header value. If multiple values are specified for a header, the value will be an array.
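To illustrate the args property in particular, the following sketch simulates a request to a hypothetical GET /search?tag=a&tag=b endpoint; note how the repeated query parameter arrives as a list:

```python
import json

# Simulated REQUEST for GET /search?tag=a&tag=b -- endpoint and
# values are illustrative only
REQUEST = json.dumps({
    "body": "",
    "args": {"tag": ["a", "b"]},   # repeated query parameters become lists
    "path": {},
    "headers": {"Host": "services.example.akoios.com"},
})

# GET /search
req = json.loads(REQUEST)
print(",".join(req["args"]["tag"]))
```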

Request Content-Type and Request Body Processing

If the HTTP request to the kernel gateway has a Content-Type header, the value of REQUEST.body may change. Below is the list of outcomes for various mime-types:

  • application/json - The REQUEST.body will be an object of key-value pairs representing the request body

  • multipart/form-data and application/x-www-form-urlencoded - The REQUEST.body will be an object of key-value pairs representing the parameters and their values. Files are currently not supported for multipart/form-data

  • text/plain - The REQUEST.body will be the string value of the body

  • All other types will be sent as strings
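A cell can account for this by checking the type of the parsed body. The sketch below simulates an application/json request (values are illustrative), so REQUEST.body is already an object rather than a string:

```python
import json

# Simulated REQUEST for an application/json POST; the gateway has
# already parsed the body into an object
REQUEST = json.dumps({
    "headers": {"Content-Type": "application/json"},
    "body": {"x": 1},
    "args": {},
    "path": {},
})

# POST /echo
req = json.loads(REQUEST)
body = req["body"]
if isinstance(body, dict):      # application/json or form data
    print(json.dumps(body))
else:                           # text/plain and other types arrive as strings
    print(body)
```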

Setting the Response Body

The response from an annotated cell may be set in one of two ways:

  1. Writing to stdout in a notebook cell

  2. Emitting output in a notebook cell

The first method is preferred because it is explicit: a cell writes to stdout using the appropriate language statement or function (e.g. Python print, Scala println, R print, etc.). The kernel gateway collects all bytes from kernel stdout and returns the entire byte string verbatim as the response body.

The second approach is used if nothing appears on stdout. This method is dependent upon language semantics, kernel implementation, and library usage. The response body will be the content.data structure in the Jupyter execute_result message.

In both cases, the response defaults to status 200 OK and Content-Type: text/plain if cell execution completes without error. If an error occurs, the status is 500 Internal Server Error. If the HTTP request method is not one supported at the given path, the status is 405 Method Not Allowed. If you wish to return custom status or headers, see the next section.

In addition, it is possible to customize the response by annotating a separate cell with ResponseInfo and the status code, along with any other information such as the mimetype of the response, in a JSON format. This information will be used in the response to the client’s HTTP request.

# ResponseInfo PUT /route
import json
print(json.dumps({
    "status" : 201,  # desired HTTP status code
    "headers" : {
        "Content-Type" : "application/json"
    }
}))

Swagger Spec

The resource /_api/spec/swagger.json is automatically generated from the notebook used to define the HTTP API. The response is a simple Swagger spec which can be used with the Swagger editor, a Swagger ui, or with any other Swagger-aware tool.

Currently, every response is listed as having a status of 200 OK.


Example

The following example shows how to prepare and expose a function in Titan.

It is recommended to prepare a mock request object for local testing, as shown below:

import json

# Mock request object for local API testing
headers = {
    'content-type': 'application/json'
}
# Input data for the function
body = {
    'data': [[2],[5],[10],[20],[25]]
}
REQUEST = json.dumps({ 'headers': headers, 'body': body })

Once the request is ready, the POST method can be defined:

# POST /prediction
import json
body = json.loads(REQUEST)['body']
response = function_to_be_exposed(body['data'])
print(json.dumps(response))

Please note how, in this case, the response is returned through the Python print command.
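Putting the two snippets together, the whole endpoint can be exercised locally. The function_to_be_exposed below is a hypothetical stand-in (a toy "model" that doubles each value), used only to make the sketch runnable:

```python
import json

# Hypothetical stand-in for function_to_be_exposed: a toy "model"
# that doubles every input value
def function_to_be_exposed(data):
    return [[2 * x for x in row] for row in data]

# Mock request object, as built above
headers = {'content-type': 'application/json'}
body = {'data': [[2], [5], [10], [20], [25]]}
REQUEST = json.dumps({'headers': headers, 'body': body})

# POST /prediction
body = json.loads(REQUEST)['body']
response = function_to_be_exposed(body['data'])
print(json.dumps(response))
```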


Using Titan


In order to start using Titan in the cloud, you should authenticate as a user by running:

titan login

Please, use the credentials provided by our sales team.

Deploying models

Once your model has been duly instrumentalized, it is possible to use Titan to deploy and transform it into a ready-to-use REST API.

Using Titan is very easy: just run the following CLI command from the folder where your model is stored:

titan deploy

The CLI will give you the option to select any of the available *.ipynb models in the folder.

After that, you will be prompted to select a suitable running environment for your model. You can check the details of the available environments in a later section.

And that’s it! Titan will manage everything from here: it will build, upload and deploy the model in the cloud, making it available through the specified endpoints at:

Deployed at: https://services.customer_name.akoios.com/model_name


For your convenience, a Swagger UI is available at the aforementioned URL to:

  • Easily check and interact with the defined endpoints
  • View the deployed Notebook
  • Check the source code of the Notebook


Logging info

You can access the logs of any running Titan service by running:

titan logs

If you run:

titan logs -f

you will be able to follow, in real time, the events occurring in the service.

Command-line help

You can get more information about how to use the command-line tool by running:

titan help

  Usage:

    titan [<flags>] <command> [<args> ...]


  Flags:

    -h, --help       Output usage information.
    -C, --chdir="."  Change working directory.
    -v, --verbose    Enable verbose log output.
        --version    Show application version.


  Commands:

    help                 Show help for a command.
    config               Show current config manifest.
    deploy               Deploy a Jupyter Notebook in the cloud.
    docs                 Open documentation website in a browser.
    login                Authorize user.
    logs                 Inspect service logs.
    open                 Open service endpoint URL in a browser.
    rollback             Rollback a service deployment.
    services list        List deployed services.
    services start       Start service.
    services stop        Stop service.
    services restart     Restart service.
    services delete      Delete service.
    upgrade              Install the latest or specified version of Titan.
    version              Show version.

  Examples:

    Deploy the project to the staging environment.
    $ titan
    Deploy the project to the production stage.
    $ titan deploy
    Manage services.
    $ titan services
    Start service.
    $ titan services start
    Stop service.
    $ titan services stop
    Delete service.
    $ titan services delete
    Rollback service version.
    $ titan rollback
    Open documentation in a browser.
    $ titan docs
    Open service endpoint URL in a browser.
    $ titan open
    Tail project logs.
    $ titan logs -f
    Show error or fatal level logs.
    $ titan logs 'error or fatal'
    Show help and examples for a command.
    $ titan help
    Upgrade titan, if a new version is available.
    $ titan upgrade
    Show current titan version.
    $ titan version


Titan approach to version control

Titan comes with an easy-to-use built-in versioning system for deployed models.

In our product, version control is built upon a naming convention: Jupyter Notebook files with the same name are considered different versions of the same model in successive deployments.

Since this mechanism “freezes” both the model and the dataset(s) used at a given state, it ensures reproducibility and makes it easy to move between different versions of the model.

Let’s see this with an example. Imagine we deploy a model called helloworld.ipynb using:

titan deploy

This will create the first version of the deployment, named v1. If an additional deployment is made with the same name helloworld.ipynb, Titan will automatically consider it a new version and tag it as v2.

As easy as that!


Rollback

The rollback command allows you to easily switch between different versions of a model.

By running titan rollback you will be prompted to select:

  • The service you want to roll back
  • The version you want to switch to

After selecting these, the chosen version will automatically be put into production!


Titan Dashboard

Apart from the CLI, Titan provides a web dashboard designed to monitor, control and manage the models and deployments of a Titan installation.


As mentioned above, Titan has been designed to boost productivity in AI/ML projects and to help Data Scientists in their day-to-day tasks. The Titan Web Dashboard provides information at different levels:

General information

In the main view, general information about the current Titan installation is shown:

  • # of currently deployed models
  • # of currently active deployments
  • # of available models
  • # of requests per hour

Apart from these global metrics, the currently active services are shown in a list, as depicted in the image below.


The available options for each service are:

  • Service Details (see next section)
  • Releases: Available releases of the service
  • Open: Automatically opens API information page
  • API: Automatically opens Swagger API interface
  • View notebook: Shows the deployed Jupyter Notebook
  • Download notebook: Downloads the model in .ipynb format

Service Details

The service details view provides relevant information regarding a deployed service:

  • Provisioning parameters
  • YAML specification
  • Available releases
  • Inline Swagger API interface
  • Inline Jupyter Notebook viewer




Environments

Titan uses a set of Docker image definitions. The following sections describe these images, including their contents, relationships, and versioning strategy.


Base is a small image supporting the options common across all core stacks. It is the basis for all other stacks.

  • Minimally-functional Jupyter Notebook server (e.g., no pandoc for saving notebooks as PDFs)
  • Miniconda Python 3.x in /opt/conda
  • No preinstalled scientific computing packages
  • Unprivileged user jovyan (uid=1000, configurable, see options) in group users (gid=100) with ownership over the /home/jovyan and /opt/conda paths
  • tini as the container entrypoint and a start-notebook.sh script as the default command
  • A start-singleuser.sh script useful for launching containers in JupyterHub
  • A start.sh script useful for running alternative commands in the container (e.g. ipython, jupyter kernelgateway, jupyter lab)
  • Options for a self-signed HTTPS certificate and passwordless sudo


Minimal adds command line tools useful when working in Jupyter applications.

  • Includes everything in Base
  • Pandoc and TeX Live for notebook document conversion
  • git, emacs, jed, nano, tzdata, and unzip


Scipy includes popular packages from the scientific Python ecosystem.

  • Includes everything in Minimal and its ancestor images
  • pandas, numexpr, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodels, cloudpickle, dill, numba, bokeh, sqlalchemy, hdf5, vincent, beautifulsoup, protobuf, and xlrd packages
  • ipywidgets for interactive visualizations in Python notebooks
  • Facets for visualizing machine learning datasets


Datascience includes libraries for data analysis from the Julia, Python, and R communities.

  • Includes everything in Scipy and its ancestor images
  • The Julia compiler and base environment
  • IJulia to support Julia code in Jupyter notebooks
  • HDF5, Gadfly, and RDatasets packages


Pyspark includes Python support for Apache Spark, optionally on Mesos.

  • Contains everything in Scipy and its ancestor images
  • Apache Spark with Hadoop binaries
  • Mesos client libraries


Titan CI/CD

Titan has been designed and built with interoperability in mind to facilitate its integration in any corporate IT architecture.

Titan can be easily invoked from CI/CD solutions, making it possible to build useful pipelines combining both Titan Services and Jobs.

As an example, this would be a sample YAML specification to use Titan from Gitlab CI:

# The deploy stage will deploy our Titan service
deploy:
  image: python:3.8
  stage: deploy
  script:
    # Install Titan CLI
    - curl -sf https://install.akoios.com/beta | sh
    # Deploy Notebook API service
    - titan deploy --image scipy helloworld.ipynb

It is important to remark that, in order to run Titan in these cases, you need to provide your Titan credentials. In the most common CI/CD platforms, this can be easily done by defining secret environment variables containing this information.


Titan Jobs

Titan does not only deploy API services; it is also suitable for running arbitrary tasks in a batch fashion.

Creating and deploying a Job using Titan is as easy as running:

titan jobs run script.py

titan jobs run will automatically deploy and run the specified script.

The use of Jobs is especially useful for Machine Learning tasks such as data processing, model (re)training, feature generation, creating a CSV file or any other task you can think of.
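As a sketch of what such a job script might contain, the hypothetical example below generates a small feature table and writes it as CSV (a real Job could write the file to disk or upload it somewhere):

```python
import csv
import io

# Hypothetical feature-generation task: square each input value
rows = [{"x": x, "x_squared": x * x} for x in range(5)]

# Write the result as CSV into an in-memory buffer
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["x", "x_squared"])
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue().strip())
```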

To check all the deployed Jobs, there are two options:

a) Running titan jobs from the CLI
b) Checking Titan’s Web Dashboard

If you need environment variables to run the script you just need to specify them inline:

titan jobs run -env FOO=bar script.py
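Inside the script, such variables can be read from the environment in the usual way. A minimal sketch (FOO is the hypothetical variable from the command above, simulated locally here):

```python
import os

# Simulate `titan jobs run -env FOO=bar` for local testing; in a
# deployed Job the variable would already be set by Titan
os.environ.setdefault("FOO", "bar")

print(os.environ["FOO"])
```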

As shown in the previous section, it is possible to schedule the execution of Python scripts at specific times through any CI/CD solution. To create the Job, the process is similar to invoking titan deploy:

job:
  image: python:3.8
  script:
    # Install Titan CLI
    - curl -sf https://install.akoios.com/beta | sh
    # Deploy and run the Job
    - titan jobs run sample-job.py

To schedule the Job (e.g. run it every Wednesday at 8:00), we can use, for example, the integrated scheduler in Gitlab CI:


After setting the scheduler, our Job will be executed as specified!


Common questions and problems

This section contains answers to typical problems users find when starting with Titan. It has been arranged as a Q&A list for ease of use.

Question 1

Q: I am getting errors when running titan deploy to deploy a model.

A: First of all, make sure the model works and runs locally before trying to put it into production.

Question 2

Q: My model runs locally, but I am unable to deploy it using titan deploy.

A: Maybe you are running into dependency problems. Check the following:

  • Make sure you are using the right environment (scipy, tensorflow, datascience) for your model.
  • Include the required packages in a requirements.txt located in the same folder as the Notebook you are deploying.

Question 3

Q: When running titan deploy I get the following error: Error: Notebook has no endpoints defined. Please, follow the instructions here: https://docs.akoios.com/#preparation

A: Make sure to include at least one endpoint. You can learn how to include endpoints here

Question 4

Q: The endpoint I defined in the Notebook is accessible, but it is not returning the expected results.

A: Check the following:

  • Make sure all the code you want to execute upon an endpoint call is contained in the same Notebook cell
  • If you are using a POST endpoint, make sure you are sending the request in the right format (e.g. application/json)
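For the second point, the sketch below builds a correct JSON POST request with Python's standard library. The URL follows the deployment pattern shown earlier and is hypothetical; uncomment the urlopen line to run it against a live service:

```python
import json
from urllib import request

# Hypothetical deployment URL -- replace with your own service
url = "https://services.customer_name.akoios.com/model_name/prediction"

payload = {"data": [[2], [5], [10], [20], [25]]}
req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},  # required for JSON bodies
    method="POST",
)
# response = request.urlopen(req)  # run against a live deployment

print(req.get_header("Content-type"))
```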