Getting Started with Telnyx Flow

Flow empowers users across various teams to build and manage sophisticated chat-bots and automated workflows.

Written by Dillin

Introduction - Beta Available!

Welcome to the launch of Telnyx Flow, our innovative no-code workflow product. Telnyx Flow is designed to empower users across various teams to build and manage sophisticated chat-bots and automated workflows without needing technical expertise.

This guide will walk you through the key features, benefits, and usage of Telnyx Flow.

Logging In

When you visit Telnyx Flow, you'll be redirected to log into your Mission Control Portal via a pop-up window. Have your credentials ready for a seamless login experience. If you don't have a Mission Control Portal account, make sure to sign up here.

Note: Your Mission Control Portal session may be cleared after some time, and you will be logged out. Simply log back into your Mission Control Portal account and refresh Flow to regain access.

Tour Guide

Visit Telnyx Flow today, sign in with your Telnyx Mission Control Portal account, and take the walk-through tour.

The tour shows you how to:

  1. Create A Workspace

  2. Upload & Embed Documentation

  3. Run Similarity Searches

  4. Execute A Workflow with an AI Tool Bucket Search

Technical Documentation

Make sure to check out the more detailed documentation available within Telnyx Flow.

Key Features of Telnyx Flow

User-Friendly Interface

Telnyx Flow boasts a graphical interface with a canvas for creating nodes and connecting them via edges, simplifying complex tasks. This intuitive design allows users of all technical backgrounds to create and manage workflows effortlessly.

No-Code Development

Flow enables non-technical users to design and modify workflows without writing a single line of code. This feature accelerates creation and maintenance processes, making it accessible to a broader audience.

AI Assistant

Flow includes its own AI Assistant, itself built as a workflow, to help answer questions about the product directly within the platform. This feature ensures users can get immediate support and guidance as they navigate the tool.

Enhanced Customer Service

With Telnyx Flow, you can create chat-bots that provide instant responses, improving customer engagement and satisfaction. This capability is crucial for businesses looking to enhance their customer service operations.

Creating Your First Workflow

Understanding Workspaces

A workspace in Telnyx Flow is a dedicated environment where users can create, manage, and organise their workflows. It serves as a central hub for all your workflow-related activities, providing a structured and collaborative space for designing and executing workflows.

The Canvas

The canvas is the heart of Telnyx Flow, a graph-based interface that simplifies workflow construction and management. Users can add nodes by right-clicking on the canvas and connect them with edges to design complex workflows. The canvas supports multiple workflows simultaneously, separated in different tabs, with unsaved changes indicated by a grey dot on the respective tab.

Nodes and Edges


Nodes are the building blocks of your workflow, each representing a specific function or operation.

Key node types include:

Request Node

Initiates a flow via HTTP methods (POST, PUT, GET, PATCH, DELETE).

  • Set the name of the endpoint if you don't want to use the unique UUID.

  • Set Query Parameters that are expected to be received.

  • Set Body Parameters that are expected to be received.
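As a sketch, a client triggering a Request Node over GET might assemble its URL like this. The host, workspace, and parameter names below are hypothetical; the real endpoint URL is shown via the execute button on the canvas.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; copy the real URL from the execute button on the canvas.
BASE_URL = "https://example-flow-host/my-workspace/my-workflow"

def build_trigger_url(user_id: str, session_id: str, query: str) -> str:
    """Assemble a GET URL carrying the query parameters the Request Node expects."""
    params = {"userId": user_id, "sessionId": session_id, "query": query}
    return f"{BASE_URL}?{urlencode(params)}"

url = build_trigger_url("u-42", "s-1", "What is my order status?")
```

The same parameters could equally be sent as body parameters on a POST, matching whatever the Request Node is configured to expect.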

AI Completion Node

Specify the language model of your choice for generating text or chat completions. Attach the Request Node's edges to the AI Completion Node.

  • Node:

    • Name: Give the AI Completion Node a memorable name based on its purpose.

    • Use dynamic variable syntax to pass through the data from the request node. You define these as query or body parameters in the request node.

    • Examples:

      • User ID: The user who is engaging with your client

        • {{ http.request.query.userId }}

      • Session ID: Ensure your client generates a unique session ID for the user's chat.

        • {{ http.request.query.sessionId }}

      • Query: Ensure your client passes through the user's query from the request node.

        • {{ http.request.query.query }}

      • System Message: Define a system message that determines the AI Completion's behaviour, or pass it through from your client.

        • {{ http.request.query.system_message }}

  • Authentication:

    • When using paid models like OpenAI, make sure to create a secret as it will be required to authenticate.

  • JSON Mode:

    • Disabled by default.

    • OpenAI gpt-4-1106-preview and gpt-3.5-turbo-1106 are the only models that support JSON Mode currently.

    • When JSON Mode is enabled, AI Tools can't be attached, as the two are not meant to be used together.

    • JSON Mode allows users to structure and return data in a JSON format. This is particularly useful for integrating AI-generated responses with applications that require structured data.

    • Example:

      • Suppose a user asks the chatbot about the status of their order. The AI Completion node can be configured in JSON Mode to return a structured response like this:

        "orderStatus": "shipped",
        "estimatedDelivery": "2024-06-10",
        "trackingNumber": "1234567890"

      • The structured JSON response can be directly fed into a customer service dashboard, displaying order status, estimated delivery date, and tracking number in a user-friendly format.

  • Settings Cog:

    • Set the behavior of the LLM after this tool is called. You can choose to show feedback, show a help action, or call another tool.

      • Tool Choice: You can set this value to force the LLM to execute this tool next before responding.

      • Show Feedback:

        • Default is false.

        • A variable named show_feedback will be returned in the metadata of the response to this request.

        • Set to true if you want to prompt the end user to provide feedback to the answer.

      • Show Help Action:

        • Default is false.

        • Set to true to show the user a button to contact a live agent after responding.
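As a rough illustration of the dynamic variable syntax above, a template such as `{{ http.request.query.userId }}` resolves to the matching value on the incoming request. The resolver below is a minimal sketch, not Flow's actual template engine.

```python
import re

# Minimal sketch of resolving Flow-style dynamic variables such as
# {{ http.request.query.userId }} against an incoming request.
# The lookup rules here are an illustration, not Flow's engine.
def resolve(template: str, context: dict) -> str:
    def lookup(match):
        value = context
        for key in match.group(1).split("."):
            value = value[key]  # walk the dotted path into the context
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

request = {"http": {"request": {"query": {"userId": "u-42", "query": "hi"}}}}
resolved = resolve(
    "User {{ http.request.query.userId }} asked: {{ http.request.query.query }}",
    request,
)
# resolved == "User u-42 asked: hi"
```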

AI Tool Node

Enhances the AI Completion Node with additional function calls. Currently supported only with OpenAI & Llama models; other models will be supported in the future.

  • Node:

    • Give the tool a clear name to differentiate it from other AI Tools.

    • Give the tool a clear description of its purpose, as the AI Completion Node will use this.

    • Give it a clear system message for the desired behaviour the AI Tool should follow.

  • Arguments:

    • Add arguments to be generated by the LLM when this tool is called. These arguments can be used as dynamic variables.

      • The name of the argument is what it will be defined as to the LLM.

      • The type of the argument is the type of the value that the LLM generates; in most cases this will be a string.

        • Enumerators are separated using a comma (,).

      • The description of the argument is what the LLM will use to know how to generate it.

    • Example Use Case:

      • Name: support_method

      • Type: enum

      • Description: Select the method of support most suited to the user.

      • List of Enums:

        • live_chat,

        • call_support,

        • email_support,

      • This AI Tool's purpose is to help provide the user with information on the different methods of contacting the support team.

  • Output:

    • Custom Output: Manually write the output of this tool. Enabling this extension will prevent you from using any other extensions. You can use dynamic variables to create dynamic outputs.

      • This works well with the example use case above where the custom output would contain information that matches the enumerators.

        • ## Call Support (`call_support`)

          United States & Canada: +18889809750

        • ## Email Support (`email_support`)

          Telnyx Support: []

    • Bucket Search: Enable this option to allow the LLM to search embedded content that you have uploaded into your buckets to generate answers to users' questions.

  • Settings Cog:

    • Tool Choice: You can set this value to force the LLM to execute this tool next before responding.

    • Show Feedback:

      • Default is false.

      • A variable named show_feedback will be returned in the metadata of the response to this request.

      • Set to true if you want to prompt the end user to provide feedback to the answer.

    • Show Help Action:

      • Default is false.

      • Set to true to show the user a button to contact a live agent after responding.
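Tool definitions like the `support_method` example above are typically passed to OpenAI-compatible models as a function schema. The snippet below is a hypothetical rendering of that schema assembled from the tool's name, description, and enum argument; the exact shape Flow sends is an internal detail.

```python
import json

# Hypothetical OpenAI-style function schema for the support_method example.
# Flow assembles something like this internally from the tool's name,
# description, and arguments; the exact schema is an implementation detail.
tool_definition = {
    "name": "support_method",
    "description": "Provide the user with the support method best suited to them.",
    "parameters": {
        "type": "object",
        "properties": {
            "support_method": {
                "type": "string",
                "enum": ["live_chat", "call_support", "email_support"],
                "description": "Select the method of support most suited to the user.",
            }
        },
        "required": ["support_method"],
    },
}

print(json.dumps(tool_definition, indent=2))
```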

Branch Node

Directs execution flow based on boolean conditions derived from canvas data.

  • Input Value: Supports dynamic variables.

  • Operators:

    • equals,

    • starts_with,

    • ends_with,

    • contains,

    • does_not_equal,

    • does_not_start_with,

    • does_not_end_with,

    • does_not_contain.

  • Execution Anchors:

    • True: Executes if the condition evaluates to true.

    • False: Executes if the condition evaluates to false.

  • Conditional Execution: Conditional edges (black dashed lines) behave differently than normal edges, executing the target node immediately when the condition is met.

  • Execution Logic: Conditional edges enable OR logic, while the Branch Node represents AND logic by requiring all conditions to be true.

  • Example Use Case:

    • Chatbot for Sales or Support Queries

      • Scenario: A chatbot needs to route user queries to either sales or support.

      • Condition: Check if the user's query contains the word "sales".

      • Setup:

        1. Branch Node: Configure the condition to check if the query contains "sales".

        2. True Execution Anchor: Connect to the Sales response node.

        3. False Execution Anchor: Connect to the Support response node.
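The sales/support example above can be sketched in a few lines. The operator names mirror the list above; this is an illustration of the branching logic, not Flow's implementation.

```python
# Sketch of a Branch Node condition check. Operator names mirror the
# Branch Node's operator list; illustration only, not Flow's engine.
OPERATORS = {
    "equals": lambda value, target: value == target,
    "starts_with": lambda value, target: value.startswith(target),
    "ends_with": lambda value, target: value.endswith(target),
    "contains": lambda value, target: target in value,
    "does_not_equal": lambda value, target: value != target,
    "does_not_start_with": lambda value, target: not value.startswith(target),
    "does_not_end_with": lambda value, target: not value.endswith(target),
    "does_not_contain": lambda value, target: target not in value,
}

def branch(value: str, operator: str, target: str) -> str:
    """Return which execution anchor ('true' or 'false') would fire."""
    return "true" if OPERATORS[operator](value, target) else "false"

# Route to Sales when the query contains "sales", otherwise to Support.
anchor = branch("I have a sales question", "contains", "sales")
# anchor == "true" -> the Sales response node executes
```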

Switch Node

The Switch Node works similarly to the Branch Node but allows for an unlimited number of output execution anchors.

  • Purpose: Conditionally execute multiple target nodes based on several condition groups.

  • Condition Groups: Each group is a collection of conditions that must all be true.

  • Output Execution Anchors: Each condition group has a corresponding output execution anchor.

  • Descriptive Names: Name each condition group clearly, like isProductKnown, for easy identification.

  • Execution Logic: If all conditions in a group are true, the target nodes connected to the corresponding output anchor are executed.

  • AND/OR Logic: Create AND conditions within the Switch Node. For OR logic, chain multiple switches or branches.

  • Example Use Case:

    • Chatbot for Multiple Query Types

      • Scenario: A chatbot needs to route user queries to various departments: sales, support, or billing.

      • Conditions: Check the query content for keywords related to each department.

      • Setup:

        1. Switch Node: Create condition groups for "sales", "support", and "billing".

          • Sales Condition Group: Checks if the query contains "sales".

          • Support Condition Group: Checks if the query contains "support".

          • Billing Condition Group: Checks if the query contains "billing".

        2. Output Execution Anchors: Connect each anchor to the corresponding department's response node.
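The routing in the example above can be sketched as condition groups, each a list of conditions that must all hold. The assumption that every fully matching group's anchor fires (rather than only the first) is an illustration choice; verify against Flow's behaviour.

```python
# Sketch of the Switch Node example. Each condition group is a list of
# conditions AND-ed together; every group that fully matches fires its
# output anchor. Illustration only, not Flow's engine.
CONDITION_GROUPS = [
    ("sales", [lambda q: "sales" in q.lower()]),
    ("support", [lambda q: "support" in q.lower()]),
    ("billing", [lambda q: "billing" in q.lower()]),
]

def switch(query: str) -> list:
    """Return the anchors whose condition groups all evaluate to true."""
    return [anchor for anchor, conditions in CONDITION_GROUPS
            if all(cond(query) for cond in conditions)]

anchors = switch("I have a question about billing")
# anchors == ["billing"] -> the Billing department's response node executes
```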

HTTP Client Node

The HTTP Client Node allows users to configure and send HTTP requests to external services with or without authentication.

  • The configuration options include:

    • URL: Specify the request URL with query parameters.

    • Headers: Add any custom headers needed for the request.

    • Body Parameters: Include any body parameters required for the request.

  • The HTTP Client Node is useful when you want to provide more specific context to your AI Completion Nodes based on a user's query before running a completion request, or to POST data based on the output you receive after a completion request has run.

  • Response Node: Ends the HTTP request and returns data to the client that initiated the request.


Edges define the connections between nodes, determining the workflow's execution path and data flow. Types of edges include:

  • Execution Edge (black/white): Controls the workflow's pathway and enables conditional decision-making through AND/OR logic.

  • Data Edge (blue): Transfers data between nodes, handling dynamic variables created during runtime.

  • Plugin Edge (orange): Enhances a node's capabilities by integrating additional tools during execution.

Building and Triggering Workflows

Manually Triggering Workflows

Workflows can be manually initiated by the end-user through a user interface. For example, in a customer application with an embedded chat interface, each user input can trigger an HTTP request to the server hosting the workflow.

Handling Customer Support Requests

Design a workflow that triggers when a customer submits a request via a chat interface. The workflow can start with a Request Node that captures the input, process it through an AI Completion Node to generate a response, and then use a Response Node to send the reply back to the customer. Additional nodes can be added to log the interaction and escalate to a human agent if needed.

Connecting Your Chat Interface with Telnyx Flow

When you create a workspace and a workflow inside your workspace, you’ll have access to an endpoint exposed publicly and can work with your Telnyx API Key. To connect your chat interface with a workflow, set up your chat application to send HTTP requests to this endpoint. You can view the endpoint URL by clicking the execute button on the canvas.


curl --location '' \
--header 'Authorization: Bearer KEYXX'

Where dillin is the workspace name and dillins-assistant is the name of the workflow.
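The same call can be made from Python. The endpoint URL below is a placeholder, since the real one comes from the execute button on the canvas, and KEYXX stands in for your Telnyx API Key.

```python
from urllib.request import Request

# Placeholder endpoint; copy the real URL from the execute button on the
# canvas. KEYXX stands in for your Telnyx API Key.
ENDPOINT = "https://example-flow-host/dillin/dillins-assistant"
API_KEY = "KEYXX"

req = Request(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# To actually send it: urllib.request.urlopen(req)
```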

Will Telnyx provide an example chat widget?

Yes, in the near future, Telnyx will provide an example chat widget that you can easily embed on your website to integrate with your workflows.

Flow Pricing

Currently, Telnyx Flow is free to use. This allows you to explore and utilise the tool without incurring any additional costs beyond the underlying API usage of our storage and inference products, facilitating easy testing and proof-of-concept development, especially for AI systems.

In the future, we may introduce a usage- or tier-based billing system, where you can access more features and higher usage limits through different plans.

For now, feel free to enjoy the full capabilities of Telnyx Flow at no charge. We will keep you updated on any changes to our pricing model.

Future Enhancements

We plan to integrate our Voice and SMS APIs through our graphical flow tool, offering further no-code access to Telnyx products between Q3 and Q4 of 2024.

This will enable users to:

  1. Design complex call control workflows with specific Voice API Command Nodes.

  2. Trigger actions in response to webhooks.

  3. Manage SMS interactions.


  • Telnyx Flow can only be accessed by organisation owners at this time.

    • In the near future, Telnyx Flow will become available to sub-members of an organisation.

    • Until this time, sub-members of an organisation will not be able to log in.

    • Update - 17th July 2024

      • Sub-members of an organisation can now access Flow.

      • Organisation owners can create permission groups in Flow and assign them to sub-members.

  • We recommend disabling adblockers to ensure all parts of the website are accessible, including the interactive demo tutorials.
