Getting Started with Telnyx Flow

Flow empowers users across various teams to build and manage sophisticated chatbots and automated workflows.

Written by Dillin

Introduction - Beta Available!

Welcome to Telnyx Flow!
Telnyx Flow is a no-code tool that empowers anyone, even non-technical users, to create and manage text- or voice-based chatbots and automated workflows. This guide will walk you through the essentials.


Getting Started

  1. Log In: Visit flow.telnyx.com, where you’ll be redirected to log into your Telnyx Mission Control account. If you don’t have an account, you can sign up here.

    • Note: You may be logged out after a period of inactivity. Simply log back into Mission Control and refresh Telnyx Flow to continue.

  2. Tour Guide: Explore Flow with a guided tour covering essential actions like creating workspaces, uploading documents, running similarity searches, and executing workflows with AI tools.


Key Features of Telnyx Flow

User-Friendly Interface

Telnyx Flow boasts a graphical interface with a canvas for creating nodes and connecting them via edges, simplifying complex tasks. This intuitive design allows users of all technical backgrounds to create and manage workflows effortlessly.

No-Code Development

Flow enables non-technical users to design and modify workflows without writing a single line of code. This feature accelerates creation and maintenance processes, making it accessible to a broader audience.

Enhanced Customer Service

With Telnyx Flow, you can create text- or voice-based chatbots that provide instant responses, improving customer engagement and satisfaction. This capability is crucial for businesses looking to enhance their customer service operations.


Creating Your First Workflow

Understanding Workspaces

A workspace is your dedicated area for organizing workflows: a central hub for managing and testing your projects.

The Canvas

The canvas is where you build workflows visually. Right-click to add nodes, connect them to define the flow, and see everything come together in one place.


Nodes and Edges

Nodes

Nodes are the building blocks of your workflow, each representing a specific function or operation.

Key node types include:

Request Node

Initiates a flow via HTTP methods (POST, PUT, GET, PATCH, DELETE).

  • Set the name of the endpoint if you don't want to use the unique UUID.

  • Set Query Parameters that are expected to be received.

  • Set Body Parameters that are expected to be received.
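A Request Node's endpoint can be exercised with any HTTP client. A minimal sketch in Python that builds such a request URL; the workspace, workflow, and parameter names here are illustrative, not values your Request Node will necessarily use:

```python
from urllib.parse import urlencode

# Hypothetical Flow endpoint; substitute your own workspace and workflow names.
base_url = "https://api.telnyx.com/v2/flow/execute/my-workspace/my-workflow"

# Query parameters the Request Node is configured to expect (illustrative names).
params = {
    "userId": "1234789",
    "sessionId": "abcdefg",
    "query": "How do I create a SIP connection?",
}

url = f"{base_url}?{urlencode(params)}"
print(url)
```

The parameters you declare on the node are the ones you can later reference as dynamic variables elsewhere in the workflow.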


AI Completion Node

Specify the language model of your choice for generating text or chat completions. Attach the Request Node's edges to the AI Completion Node.

  • Node:

    • Name: Give the AI Completion Node a memorable name based on its purpose.

    • Use dynamic variable syntax to pass through data from the Request Node. You define these as query or body parameters in the Request Node.

    • Examples:

      • User ID: The user who is engaging with your client

        • {{ http.request.query.userId }}

      • Session ID: Ensure your client generates a unique session ID for the user's chat.

        • {{ http.request.query.sessionId }}

      • Query: Ensure your client passes through the user's query from the Request Node.

        • {{ http.request.query.query }}

      • System Message: Define a system message that sets the AI Completion's behavior, or pass it through from your client.

        • {{ http.request.query.system_message }}

  • Authentication:

    • When using paid models such as OpenAI's, make sure to create a secret, as it is required for authentication.

  • JSON Mode:

    • Disabled by default.

    • Currently, OpenAI's gpt-4-1106-preview and gpt-3.5-turbo-1106 are the only models that support JSON Mode.

    • When JSON Mode is enabled, AI Tools can't be attached, as the two are not meant to be used together.

    • JSON Mode allows users to structure and return data in a JSON format. This is particularly useful for integrating AI-generated responses with applications that require structured data.

    • Example:

      • Suppose a user asks the chatbot about the status of their order. The AI Completion node can be configured in JSON Mode to return a structured response like this:

        {
        "orderStatus": "shipped",
        "estimatedDelivery": "2024-06-10",
        "trackingNumber": "1234567890"
        }

      • The structured JSON response can be directly fed into a customer service dashboard, displaying order status, estimated delivery date, and tracking number in a user-friendly format.

  • Settings Cog:

    • Set the behavior of the LLM after this tool is called. You can choose to show feedback, show a help action, or call another tool.

      • Tool Choice: You can set this value to force the LLM to execute this tool next before responding.

      • Show Feedback:

        • Default is false.

        • A variable named show_feedback will be returned in the metadata of the response to this request.

        • Set to true if you want to prompt the end user to provide feedback to the answer.

      • Show Help Action:

        • Default is false.

        • Set to true to show the user a button to contact a live agent after responding.
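A structured completion returned in JSON Mode can be consumed directly by your client. A sketch using the order-status example above; the exact response envelope your integration receives, and the metadata shape carrying show_feedback, are assumptions here:

```python
import json

# Content returned by an AI Completion node in JSON Mode (from the example above).
completion_content = """
{
  "orderStatus": "shipped",
  "estimatedDelivery": "2024-06-10",
  "trackingNumber": "1234567890"
}
"""

order = json.loads(completion_content)
print(f"Order {order['trackingNumber']} is {order['orderStatus']}, "
      f"arriving {order['estimatedDelivery']}")

# Hypothetical metadata shape: the Settings Cog's show_feedback flag is
# returned in the response metadata.
metadata = {"show_feedback": True}
if metadata.get("show_feedback"):
    print("Prompt the end user for feedback on this answer.")
```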


AI Tool Node

Enhances the AI Completion Node with additional function calls. Currently supported only with OpenAI and Llama models; support for other models is planned.

  • Node:

    • Give the tool a clear name to differentiate it from other AI Tools.

    • Give the tool a clear description of its purpose, as the AI Completion Node will use this.

    • Give it a clear system message for the desired behavior the AI Tool should follow.

  • Arguments:

    • Add arguments to be generated by the LLM when this tool is called. These arguments can be used as dynamic variables.

      • The name of the argument is what it will be defined as to the LLM.

      • The type of the argument is the type of the value that the LLM generates; in most cases this will be a string.

        • Enumerators are separated using a comma (,).

      • The description of the argument is what the LLM will use to know how to generate it.

    • Example Use Case:

      • Name: support_method

      • Type: enum

      • Description: Select the method of support most suited to the user.

      • List of Enums:

        • live_chat

        • call_support

        • email_support

      • This AI Tool's purpose is to help provide the user with information on the different methods of contacting the support team.

  • Output:

    • Custom Output: Manually write the output of this tool. Enabling this extension will prevent you from using any other extensions. You can use dynamic variables to create dynamic outputs.

      • This works well with the example use case above where the custom output would contain information that matches the enumerators.

        • ## Call Support (`call_support`)

          United States & Canada: +18889809750

        • ## Email Support (`email_support`)

          Telnyx Support: support@telnyx.com

    • Bucket Search: Enable this option to allow the LLM to search embedded content that you have uploaded into your buckets to generate answers to users' questions.

  • Settings Cog:

    • Tool Choice: You can set this value to force the LLM to execute this tool next before responding.

    • Show Feedback:

      • Default is false.

      • A variable named show_feedback will be returned in the metadata of the response to this request.

      • Set to true if you want to prompt the end user to provide feedback to the answer.

    • Show Help Action:

      • Default is false.

      • Set to true to show the user a button to contact a live agent after responding.


Branch Node

Directs execution flow based on boolean conditions derived from canvas data.

  • Input Value: Supports dynamic variables.

  • Operators:

    • equals

    • starts_with

    • ends_with

    • contains

    • does_not_equal

    • does_not_start_with

    • does_not_end_with

    • does_not_contain

  • Execution Anchors:

    • True: Executes if the condition evaluates to true.

    • False: Executes if the condition evaluates to false.

  • Conditional Execution: Conditional edges (black dashed lines) behave differently than normal edges, executing the target node immediately when the condition is met.

  • Execution Logic: Conditional edges enable OR logic, while the Branch Node represents AND logic by requiring all conditions to be true.

  • Example Use Case:

    • Chatbot for Sales or Support Queries

      • Scenario: A chatbot needs to route user queries to either sales or support.

      • Condition: Check if the user's query contains the word "sales".

      • Setup:

        1. Branch Node: Configure the condition to check if the query contains "sales".

        2. True Execution Anchor: Connect to the Sales response node.

        3. False Execution Anchor: Connect to the Support response node.
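The routing in the setup above can be sketched as a simple condition check. The operator names follow the list in this section; the function itself is only an illustration of how the true/false execution anchors select a path, not the node's actual implementation:

```python
def branch(input_value: str, operator: str, comparison: str) -> bool:
    """Evaluate a Branch Node condition; the result selects the True or
    False execution anchor."""
    ops = {
        "equals": lambda v, c: v == c,
        "starts_with": lambda v, c: v.startswith(c),
        "ends_with": lambda v, c: v.endswith(c),
        "contains": lambda v, c: c in v,
        "does_not_equal": lambda v, c: v != c,
        "does_not_start_with": lambda v, c: not v.startswith(c),
        "does_not_end_with": lambda v, c: not v.endswith(c),
        "does_not_contain": lambda v, c: c not in v,
    }
    return ops[operator](input_value, comparison)

# Route a query: True anchor -> Sales response node, False anchor -> Support.
query = "I'd like to talk to sales about pricing"
anchor = "sales" if branch(query, "contains", "sales") else "support"
print(anchor)
```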


Switch Node

The Switch Node works similarly to the Branch Node but allows for an unlimited number of output execution anchors.

  • Purpose: Conditionally execute multiple target nodes based on several condition groups.

  • Condition Groups: Each group is a collection of conditions that must all be true.

  • Output Execution Anchors: Each condition group has a corresponding output execution anchor.

  • Descriptive Names: Name each condition group clearly, like isProductKnown, for easy identification.

  • Execution Logic: If all conditions in a group are true, the target nodes connected to the corresponding output anchor are executed.

  • AND/OR Logic: Create AND conditions within the Switch Node. For OR logic, chain multiple switches or branches.

  • Example Use Case:

    • Chatbot for Multiple Query Types

      • Scenario: A chatbot needs to route user queries to various departments: sales, support, or billing.

      • Conditions: Check the query content for keywords related to each department.

      • Setup:

        1. Switch Node: Create condition groups for "sales", "support", and "billing".

          • Sales Condition Group: Checks if the query contains "sales".

          • Support Condition Group: Checks if the query contains "support".

          • Billing Condition Group: Checks if the query contains "billing".

        2. Output Execution Anchors: Connect each anchor to the corresponding department's response node.
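The setup above can be pictured as named groups of AND-ed checks, each mapped to an output anchor. A sketch of that semantics; the group names and keyword checks are illustrative, and every matching group fires its anchor:

```python
# Each condition group is a list of checks that must ALL be true (AND logic).
# Group names and the keyword checks are illustrative.
condition_groups = {
    "sales": [lambda q: "sales" in q],
    "support": [lambda q: "support" in q],
    "billing": [lambda q: "billing" in q],
}

def switch(query: str) -> list:
    """Return the output anchors whose condition groups all evaluate to true."""
    return [name for name, conds in condition_groups.items()
            if all(cond(query) for cond in conds)]

print(switch("I have a billing question"))
```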


HTTP Client Node

The HTTP Client Node allows users to configure and send HTTP requests to external services with or without authentication.

  • The configuration options include:

    • URL: Specify the request URL with query parameters.

    • Headers: Add any custom headers needed for the request.

    • Body Parameters: Include any body parameters required for the request.

  • The HTTP Client Node is useful when you want to fetch more specific context for your AI Completion nodes based on a user's query before running a completion request, or to POST data to an external service after a completion request runs, based on the output you receive.

  • Response Node: Ends the HTTP request and returns data to the client that initiated the request.
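The HTTP Client Node's configuration (URL with query parameters, headers, body parameters) maps one-to-one onto an ordinary HTTP request. A sketch with Python's standard library; the endpoint and field names are hypothetical, and the request is built but not sent:

```python
import json
import urllib.request
from urllib.parse import urlencode

# URL with query parameters (hypothetical external service).
url = "https://example.com/api/orders?" + urlencode({"userId": "1234789"})

# Custom headers and body parameters, as configured on the node.
headers = {"Content-Type": "application/json", "Authorization": "Bearer KEYXX"}
body = json.dumps({"status": "shipped"}).encode("utf-8")

req = urllib.request.Request(url, data=body, headers=headers)
print(req.get_method(), req.full_url)  # data is present, so the method is POST
# urllib.request.urlopen(req) would send the request; omitted here.
```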


Edges

Edges define the connections between nodes, determining the workflow's execution path and data flow. Types of edges include:

  • Execution Edge (black/white): Controls the workflow's pathway and enables conditional decision-making through AND/OR logic.

  • Data Edge (blue): Transfers data between nodes, handling dynamic variables created during runtime.

  • Plugin Edge (orange): Enhances a node's capabilities by integrating additional tools during execution.


Building and Triggering Workflows

Manually Triggering Workflows

Workflows can be manually initiated by the end-user through a user interface. For example, in a customer application with an embedded chat interface, each user input can trigger an HTTP request to the server hosting the workflow.

Handling Customer Support Requests

Design a workflow that triggers when a customer submits a request via a chat interface. The workflow can start with a Request Node that captures the input, process it through an AI Completion Node to generate a response, and then use a Response Node to send the reply back to the customer. Additional nodes can be added to log the interaction and escalate to a human agent if needed.

Connecting Your Chat Interface with Telnyx Flow

When you create a workspace and a workflow inside it, you'll have access to a publicly exposed endpoint that works with your Telnyx API key. To connect your chat interface with a workflow, set up your chat application to send HTTP requests to this endpoint. You can view the endpoint URL by clicking the execute button on the canvas.

Example

curl --location 'https://api.telnyx.com/v2/flow/execute/dillin/dillins-assistant?user_id=1234789&session_id=abcdefg&question=How%20do%20I%20create%20a%20sip%20connection%3F' \
--header 'Authorization: Bearer KEYXX'

Here dillin is the workspace name and dillins-assistant is the workflow name.
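The same call can be made from a chat application's backend. A sketch in Python mirroring the curl example; KEYXX stands in for your real API key, and the request is constructed but not sent here:

```python
import urllib.request
from urllib.parse import urlencode

params = {
    "user_id": "1234789",
    "session_id": "abcdefg",
    "question": "How do I create a sip connection?",
}
url = ("https://api.telnyx.com/v2/flow/execute/dillin/dillins-assistant?"
       + urlencode(params))

req = urllib.request.Request(url, headers={"Authorization": "Bearer KEYXX"})
# with urllib.request.urlopen(req) as resp:   # would execute the workflow
#     print(resp.read().decode())
print(req.full_url)
```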

Will Telnyx provide an example chat widget?

Yes, in the near future, Telnyx will provide an example chat widget that you can easily embed on your website to integrate with your workflows.


Raise an Issue

If you have feedback or believe you've encountered a bug, please contact the Telnyx NOC with the following information to help us address the issue efficiently:

  1. Use Case:

    • Specify whether you’re using Text Chatbots or Voice Chatbots, and briefly describe your goal.

    • Outline the nodes included in the workflow.

  2. Workspace Details:

    • Provide the Workspace Name or ID and the Workflow Name or ID.

  3. Error Documentation:

    • Include screenshots of any errors encountered.

    • Attach the full error message from the events tab when running workflows in the user interface.

  4. Date and Time:

    • Specify the date and time of the occurrence (including timezone), ideally within 48 hours of the issue arising.

  5. Reproducibility:

    • Indicate whether the issue is reproducible or occurs intermittently.

To report an issue, please email support@telnyx.com or use the chat feature within your Mission Control Portal account.

Providing these details will ensure a faster and more accurate resolution of your concern.


Flow Pricing

Currently, Telnyx Flow is free to use. This allows you to explore and use the tool without incurring any additional costs beyond the underlying API usage of our storage, inference, and voice products, facilitating easy testing and proof-of-concept development, especially for AI-enriched systems.

In the future, we may introduce a usage- or tier-based billing system. This will allow you to access more features and higher usage limits through different plans.

For now, feel free to enjoy the full capabilities of Telnyx Flow at no charge. We will keep you updated on any changes to our pricing model.


Future Enhancements

We plan to integrate our Voice and SMS APIs through our graphical flow tool, offering further no-code access to Telnyx products between Q3 and Q4 of 2024.

This will enable users to:

  1. Design complex call control workflows with specific Voice API Command Nodes.

    • Update (24 October 2024): Call Control Workflows are now available.

  2. Manage SMS interactions.


Limitations

We recommend disabling adblockers on flow.telnyx.com to ensure all parts of the website are accessible, including the interactive demo tutorials.


Technical Documentation

Further detailed documentation is available within Telnyx Flow.
