
Releases: pipecat-ai/pipecat

v0.0.49 (17 Nov 2024, commit 53f675f)

Added

  • Added RTVI on_bot_started event, which is useful in single-turn interactions.

  • Added DailyTransport events dialin-connected, dialin-stopped, dialin-error and dialin-warning. Requires daily-python >= 0.13.0.

  • Added RimeHttpTTSService and the 07q-interruptible-rime.py foundational example.

  • Added STTMuteFilter, a general-purpose processor that combines STT muting and interruption control. When active, it prevents both transcription and interruptions during bot speech. The processor supports multiple strategies: FIRST_SPEECH (mute only during the bot's first speech), ALWAYS (mute during all bot speech), or CUSTOM (mute based on a provided callback).
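
A minimal usage sketch (the module path and the config/strategy parameter names are assumptions based on pipecat's naming conventions, not confirmed API):

from pipecat.processors.filters.stt_mute_filter import (  # assumed path
    STTMuteConfig,
    STTMuteFilter,
    STTMuteStrategy,
)

# Mute transcription only while the bot makes its first speech.
stt_mute = STTMuteFilter(config=STTMuteConfig(strategy=STTMuteStrategy.FIRST_SPEECH))

The filter would go between the transport input and the STT service so it can gate audio before transcription.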

  • Added STTMuteFrame, a control frame that enables/disables speech transcription in STT services.

v0.0.48 (10 Nov 2024, commit 1d4be01)

Added

  • There's now an input queue in each frame processor. When you call FrameProcessor.push_frame(), it internally calls FrameProcessor.queue_frame() on the next processor (upstream or downstream), and the frame is queued internally (except for system frames). The queued frames then get processed. This input queue also makes it possible for frame processors to stop processing more frames by calling FrameProcessor.pause_processing_frames(), and to resume by calling FrameProcessor.resume_processing_frames().
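
A sketch of what this enables inside a FrameProcessor subclass (assuming both methods are coroutines, consistent with the rest of the async API):

# Hold back queued frames until some external condition is met.
await self.pause_processing_frames()   # frames accumulate in the input queue
# ... wait for a notifier, a timer, etc. ...
await self.resume_processing_frames()  # queued frames are processed again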

  • Added audio filter NoisereduceFilter.

  • Introduced input transport audio filters (BaseAudioFilter). Audio filters can be used to remove background noise before audio is sent to the VAD.

  • Introduced output transport audio mixers (BaseAudioMixer). Output transport audio mixers can be used, for example, to add background sounds or any other audio mixing functionality before the output audio is written to the transport.

  • Added GatedOpenAILLMContextAggregator. This aggregator holds the last received OpenAI LLM context frame and doesn't let it through until its notifier is notified.

  • Added WakeNotifierFilter. This processor takes a list of frame types and executes a given callback predicate when a frame of any of those types is processed. If the callback returns True, the notifier is notified.

  • Added NullFilter. A null filter doesn't push any frames upstream or downstream. This is usually used to disable one of the pipelines in ParallelPipeline.

  • Added EventNotifier. This can be used as a very simple synchronization mechanism between processors. The sketch below shows it wired to WakeNotifierFilter and GatedOpenAILLMContextAggregator.
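
A sketch wiring the three processors together (module paths and keyword names are assumptions based on pipecat's package layout):

from pipecat.frames.frames import TranscriptionFrame
from pipecat.processors.aggregators.gated_openai_llm_context import (  # assumed path
    GatedOpenAILLMContextAggregator,
)
from pipecat.processors.filters.wake_notifier_filter import WakeNotifierFilter  # assumed path
from pipecat.sync.event_notifier import EventNotifier  # assumed path

notifier = EventNotifier()

async def wake_check(frame):
    # Notify only when the transcription contains the wake phrase.
    return "hey bot" in frame.text.lower()

wake_filter = WakeNotifierFilter(notifier, types=(TranscriptionFrame,), filter=wake_check)

# Holds the last OpenAI LLM context frame until the notifier fires.
gated_context = GatedOpenAILLMContextAggregator(notifier=notifier)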

  • Added TavusVideoService. This is an integration for Tavus digital twins. (see https://www.tavus.io/)

  • Added DailyTransport.update_subscriptions(). This allows fine-grained control of the media subscriptions you want for each participant in a room.
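
For example (the participant ID is a placeholder; the settings dict is assumed to mirror daily-python's subscription settings):

await transport.update_subscriptions(
    participant_settings={
        "<participant-id>": {"media": {"camera": "unsubscribed"}},
    }
)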

  • Added audio filter KrispFilter.

Changed

  • The following DailyTransport functions are now async, which means they need to be awaited: start_dialout, stop_dialout, start_recording, stop_recording, capture_participant_transcription and capture_participant_video.
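
For example, code that previously called these functions directly now needs an await (transport being a DailyTransport instance):

await transport.start_recording()
# ...
await transport.stop_recording()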

  • Changed the default output sample rate to 24000. This changes all TTS services to output at 24000 and also the default output transport sample rate. This improves audio quality at the cost of some extra bandwidth.

  • AzureTTSService now uses Azure websockets instead of HTTP requests.

  • The previous AzureTTSService HTTP implementation is now AzureHttpTTSService.

Fixed

  • Websocket transports (FastAPI and Websocket) now synchronize with the clock before sending data. This allows interruptions to work out of the box.

  • Improved bot speaking detection for all TTS services by using actual bot audio.

  • Fixed an issue that was generating constant bot started/stopped speaking frames for HTTP TTS services.

  • Fixed an issue that was causing stuttering with AWS TTS service.

  • Fixed an issue with PlayHTTTSService, where the TTFB metrics were reporting very small time values.

  • Fixed an issue where AzureTTSService wasn't initializing the specified language.

Other

  • Added the 23-bot-background-sound.py foundational example.

  • Added a new foundational example, 22-natural-conversation.py. This example shows how to achieve a more natural conversation by detecting when the user finishes a statement.

v0.0.47 (22 Oct 2024, commit a46eaa8)

Added

  • Added AssemblyAISTTService and corresponding foundational examples 07o-interruptible-assemblyai.py and 13d-assemblyai-transcription.py.

  • Added a foundational example for Gladia transcription: 13c-gladia-transcription.py

Changed

  • Updated GladiaSTTService to use the V2 API.

  • Changed DailyTransport transcription model to nova-2-general.

Fixed

  • Fixed an issue that would cause an import error when importing SileroVADAnalyzer from the old package pipecat.vad.silero.

  • Fixed enable_usage_metrics to control LLM/TTS usage metrics separately from enable_metrics.

v0.0.46 (20 Oct 2024, commit ee5ae0d)

Added

  • Added an audio_passthrough parameter to STTService. If enabled, audio frames are pushed downstream in case other processors need them.
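
For example, using DeepgramSTTService as a stand-in for any STTService subclass:

# Audio frames keep flowing downstream so later processors can use them.
stt = DeepgramSTTService(api_key="...", audio_passthrough=True)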

  • Added input parameter options for PlayHTTTSService and PlayHTHttpTTSService.

Changed

  • Changed DeepgramSTTService model to nova-2-general.

  • Moved SileroVAD audio processor to processors.audio.vad.

  • Module utils.audio is now audio.utils. A new resample_audio function has been added.

  • PlayHTTTSService now uses PlayHT websockets instead of HTTP requests.

  • The previous PlayHTTTSService HTTP implementation is now PlayHTHttpTTSService.

  • PlayHTTTSService and PlayHTHttpTTSService now use a voice_engine of PlayHT3.0-mini, which allows for multi-lingual support.

  • Renamed OpenAILLMServiceRealtimeBeta to OpenAIRealtimeBetaLLMService to match other services.

Deprecated

  • LLMUserResponseAggregator and LLMAssistantResponseAggregator are mostly deprecated; use OpenAILLMContext instead.

  • The vad package is now deprecated and audio.vad should be used instead. The vad package will be removed in a future release.

Fixed

  • Fixed an issue that would cause an error if no VAD analyzer was passed to LiveKitTransport params.

  • Fixed SileroVAD processor to support interruptions properly.

Other

  • Added examples/foundational/07-interruptible-vad.py. This is the same as 07-interruptible.py but using the SileroVAD processor instead of passing the VADAnalyzer in the transport.

v0.0.45 (16 Oct 2024, commit 4075b19)

Changed

  • Metrics messages have moved out from the transport's base output into RTVI.

v0.0.44 (16 Oct 2024, commit d255b7d)

Added

  • Added support for OpenAI Realtime API with the new OpenAILLMServiceRealtimeBeta processor. (see https://platform.openai.com/docs/guides/realtime/overview)

  • Added RTVIBotTranscriptionProcessor, which sends the RTVI bot-transcription protocol message. These messages contain TTS text aggregated into sentences.

  • Added new input params to the MarkdownTextFilter utility. You can set filter_code to filter code from text and filter_tables to filter tables from text.
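
A sketch (the nested InputParams class is an assumption, following the naming other pipecat utilities use):

text_filter = MarkdownTextFilter(
    params=MarkdownTextFilter.InputParams(filter_code=True, filter_tables=True)
)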

  • Added CanonicalMetricsService. This processor uses the new AudioBufferProcessor to capture conversation audio and later send it to Canonical AI. (see https://canonical.chat/)

  • Added AudioBufferProcessor. This processor can be used to buffer mixed user and bot audio. This can later be saved into an audio file or processed by some audio analyzer.
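
A placement sketch (stt, llm, tts and transport are placeholders; putting the processor after the transport output lets it see both user and bot audio):

audiobuffer = AudioBufferProcessor()

pipeline = Pipeline([
    transport.input(),
    stt,
    llm,
    tts,
    transport.output(),
    audiobuffer,  # buffers the mixed user and bot audio
])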

  • Added on_first_participant_joined event to LiveKitTransport.

Changed

  • LLM text responses are now logged properly as unicode characters.

  • UserStartedSpeakingFrame, UserStoppedSpeakingFrame, BotStartedSpeakingFrame, BotStoppedSpeakingFrame, BotSpeakingFrame and UserImageRequestFrame are now based on SystemFrame.

Fixed

  • Merged RTVIBotLLMProcessor/RTVIBotLLMTextProcessor and RTVIBotTTSProcessor/RTVIBotTTSTextProcessor to avoid out-of-order issues.

  • Fixed an issue in RTVI protocol that could cause a bot-llm-stopped or bot-tts-stopped message to be sent before a bot-llm-text or bot-tts-text message.

  • Fixed DeepgramSTTService constructor settings not being merged with default ones.

  • Fixed an issue in Daily transport that would cause tasks to be hanging if urgent transport messages were being sent from a transport event handler.

  • Fixed an issue in BaseOutputTransport that would cause EndFrame to be pushed downstream too early and call FrameProcessor.cleanup() before letting the transport stop properly.

v0.0.43 (10 Oct 2024, commit 66a76af)

Added

  • Added a new util called MarkdownTextFilter which is a subclass of a new base class called BaseTextFilter. This is a configurable utility which is intended to filter text received by TTS services.

  • Added the new RTVIUserLLMTextProcessor. This processor sends an RTVI user-llm-text message with the user content that was sent to the LLM.

Changed

  • TransportMessageFrame doesn't have an urgent field anymore; instead, there's now a TransportMessageUrgentFrame, which is a SystemFrame and therefore skips all internal queuing.

  • TTS services now convert input languages to match each service's language format.

Fixed

  • Fixed an issue where changing a language with the Deepgram STT service wouldn't apply the change. This was fixed by disconnecting and reconnecting when the language changes.

v0.0.42 (02 Oct 2024, commit 65eeb0f)

Added

  • SentryMetrics has been added to report frame processor metrics to Sentry. This is now possible because FrameProcessorMetrics can now be passed to FrameProcessor.

  • Added a Google TTS service and the corresponding foundational example 07n-interruptible-google.py.

  • Added AWS Polly TTS support and 07m-interruptible-aws.py as an example.

  • Added InputParams to Azure TTS service.

  • Added LivekitTransport (audio-only for now).

  • RTVI 0.2.0 is now supported.

  • All FrameProcessors can now register event handlers.

tts = SomeTTSService(...)

@tts.event_handler("on_connected")
async def on_connected(processor):
    ...

  • Added AsyncGeneratorProcessor. This processor can be used together with a FrameSerializer as an async generator. It provides a generator() function that returns an AsyncGenerator and that yields serialized frames.

  • Added EndTaskFrame and CancelTaskFrame. These new frames are meant to be pushed upstream to tell the pipeline task to stop gracefully or immediately, respectively.

  • Added configurable LLM parameters (e.g., temperature, top_p, max_tokens, seed) for OpenAI, Anthropic, and Together AI services along with corresponding setter functions.
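
For example (the nested InputParams naming is assumed from pipecat's service conventions):

llm = OpenAILLMService(
    api_key="...",
    model="gpt-4o",
    params=OpenAILLMService.InputParams(temperature=0.7, max_tokens=1024),
)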

  • Added sample_rate as a constructor parameter for TTS services.

  • Pipecat has a pipeline-based architecture. The pipeline consists of frame processors linked to each other, and the elements traveling across the pipeline are called frames. For deterministic behavior, the frames traveling through the pipeline should always be ordered, except for system frames, which are out-of-band. To achieve that, each frame processor should only output frames from a single task. In this version, all frame processors have their own task to push frames: when push_frame() is called, the given frame is put into an internal queue (with the exception of system frames) and a frame processor task pushes it out.
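
For processor authors nothing changes: you still call push_frame() and ordering is handled internally. A minimal passthrough processor:

from pipecat.frames.frames import Frame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

class Passthrough(FrameProcessor):
    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        # The frame is queued internally (except system frames) and this
        # processor's own task pushes it to the next processor, in order.
        await self.push_frame(frame, direction)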

  • Added pipeline clocks. A pipeline clock is used by the output transport to know when a frame needs to be presented. For that, all frames now have an optional pts field (presentation timestamp). There's currently just one clock implementation, SystemClock, and the pts field is currently only used for TextFrames (audio and image frames will be next).

  • A clock can now be specified to PipelineTask (defaults to SystemClock). This clock will be passed to each frame processor via the StartFrame.
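
For example (the SystemClock module path is an assumption):

from pipecat.clocks.system_clock import SystemClock

# Explicit here for illustration; SystemClock is also the default.
task = PipelineTask(pipeline, clock=SystemClock())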

  • Added CartesiaHttpTTSService.

  • DailyTransport now supports setting the audio bitrate to improve audio quality through the DailyParams.audio_out_bitrate parameter. The new default is 96kbps.

  • DailyTransport now uses the number of audio output channels (1 or 2) to set mono or stereo audio when needed.

  • Interruptions support has been added to TwilioFrameSerializer when using FastAPIWebsocketTransport.

  • Added new LmntTTSService text-to-speech service. (see https://www.lmnt.com/)

  • Added TTSModelUpdateFrame, TTSLanguageUpdateFrame, STTModelUpdateFrame, and STTLanguageUpdateFrame frames to allow you to switch models, languages and voices in TTS and STT services.

  • Added new transcriptions.Language enum.
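
A sketch of switching the TTS language at runtime (the frame and enum import locations are assumptions):

from pipecat.frames.frames import TTSLanguageUpdateFrame
from pipecat.transcriptions.language import Language

await task.queue_frame(TTSLanguageUpdateFrame(language=Language.ES))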

Changed

  • Context frames are now pushed downstream from assistant context aggregators.

  • Removed Silero VAD torch dependency.

  • Updated individual update settings frame classes into a single ServiceUpdateSettingsFrame class.

  • We now distinguish between input and output audio and image frames. We introduce InputAudioRawFrame, OutputAudioRawFrame, InputImageRawFrame and OutputImageRawFrame (and other subclasses of those). The input frames usually come from an input transport and are meant to be processed inside the pipeline to generate new frames. However, the input frames will not be sent through an output transport. The output frames can also be processed by any frame processor in the pipeline and they are allowed to be sent by the output transport.

  • ParallelTask has been renamed to SyncParallelPipeline. A SyncParallelPipeline is a frame processor that contains a list of different pipelines to be executed concurrently. The difference between a SyncParallelPipeline and a ParallelPipeline is that, given an input frame, the SyncParallelPipeline will wait for all the internal pipelines to complete. This is achieved by making sure the last processor in each of the pipelines is synchronous (e.g. an HTTP-based service that waits for the response).

  • StartFrame is back to being a system frame, to make sure it's processed immediately by all processors. EndFrame stays a control frame since it needs to be ordered, allowing the frames in the pipeline to be processed.

  • Updated MoondreamService revision to 2024-08-26.

  • CartesiaTTSService and ElevenLabsTTSService now add presentation timestamps to their text output. This allows the output transport to push the text frames downstream at almost the same time the words are spoken. We say "almost" because currently the audio frames don't have presentation timestamps, but they should be played at roughly the same time.

  • DailyTransport.on_joined event now returns the full session data instead of just the participant.

  • CartesiaTTSService is now a subclass of TTSService.

  • DeepgramSTTService is now a subclass of STTService.

  • WhisperSTTService is now a subclass of SegmentedSTTService. A SegmentedSTTService is an STTService where the provided audio is given in one big chunk (i.e., from when the user starts speaking until the user stops speaking) instead of a continuous stream.

Fixed

  • Fixed OpenAI handling of multiple function calls.

  • Fixed a Cartesia TTS issue that would cause audio to be truncated in some cases.

  • Fixed a BaseOutputTransport issue that would stop audio and video rendering tasks (after receiving an EndFrame) before the internal queue was emptied, causing the pipeline to finish prematurely.

  • StartFrame is now the first frame every processor receives. This avoids situations where other frames arrive before initialization (which happens on StartFrame), resulting in undesired behavior.

Performance

  • obj_id() and obj_count() now use itertools.count, avoiding the need for a threading.Lock.

v0.0.41 (10 Sep 2024, commit e038767)

Added

  • Added LivekitFrameSerializer audio frame serializer.

Fixed

  • Fixed a FastAPIWebsocketOutputTransport variable name clash with a subclass.

  • Fixed an AnthropicLLMService issue with empty arguments in function calling.

Other

  • Fixed studypal example errors.

v0.0.40 (20 Aug 2024)

Added

  • VAD parameters can now be dynamically updated using the VADParamsUpdateFrame.

  • ErrorFrame now has a fatal field to indicate the bot should exit if a fatal error is pushed upstream (False by default). A new FatalErrorFrame that sets this flag to True has been added.

  • AnthropicLLMService now supports function calling and initial support for prompt caching.
    (see https://www.anthropic.com/news/prompt-caching)

  • ElevenLabsTTSService can now specify ElevenLabs input parameters such as output_format.

  • TwilioFrameSerializer can now specify Twilio's and Pipecat's desired sample rates to use.

  • Added new on_participant_updated event to DailyTransport.

  • Added DailyRESTHelper.delete_room_by_name() and DailyRESTHelper.delete_room_by_url().

  • Added LLM and TTS usage metrics. Those are enabled when PipelineParams.enable_usage_metrics is True.
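
For example, when creating the pipeline task:

task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True, enable_usage_metrics=True),
)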

  • AudioRawFrames are now pushed downstream from the base output transport. This allows capturing the exact words the bot says by adding an STT service at the end of the pipeline.

  • Added the new GStreamerPipelineSource. This processor can generate image or audio frames from a GStreamer pipeline (e.g. reading an MP4 file, an RTP stream or anything else supported by GStreamer).

  • Added TransportParams.audio_out_is_live. This flag is False by default and it is useful to indicate we should not synchronize audio with sporadic images.

  • Added new BotStartedSpeakingFrame and BotStoppedSpeakingFrame control frames. These frames are pushed upstream and they should wrap BotSpeakingFrame.

  • Transports now allow you to register event handlers without decorators.
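
A sketch (assuming the registration method is add_event_handler(name, handler), mirroring the decorator form):

async def on_first_participant_joined(transport, participant):
    ...

transport.add_event_handler("on_first_participant_joined", on_first_participant_joined)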

Changed

  • Support for the RTVI message protocol 0.1. This includes new messages, support for message responses, support for actions, configuration, webhooks and a bunch of new cool stuff.
    (see https://docs.rtvi.ai/)

  • SileroVAD dependency is now imported via pip's silero-vad package.

  • ElevenLabsTTSService now uses eleven_turbo_v2_5 model by default.

  • BotSpeakingFrame is now a control frame.

  • StartFrame is now a control frame similar to EndFrame.

  • DeepgramTTSService is now more customizable. You can adjust the encoding and sample rate.

Fixed

  • TTSStartFrame and TTSStopFrame are now sent when TTS really starts and stops. This allows for knowing when the bot starts and stops speaking even with asynchronous services (like Cartesia).

  • Fixed AzureSTTService transcription frame timestamps.

  • Fixed an issue with DailyRESTHelper.create_room() expirations which would cause this function to stop working after the initial expiration elapsed.

  • Improved EndFrame and CancelFrame handling. EndFrame should end things gracefully while a CancelFrame should cancel all running tasks as soon as possible.

  • Fixed an issue in AIService that would cause a yielded None value to be processed.

  • RTVI's bot-ready message is now sent when the RTVI pipeline is ready and a first participant joins.

  • Fixed a BaseInputTransport issue that was causing incoming system frames to be queued instead of being pushed immediately.

  • Fixed a BaseInputTransport issue that was causing incoming start/stop interruption frames to not cancel tasks and be processed properly.

Other

  • Added the studypal example (thanks to the Cartesia folks!).

  • Most examples now use Cartesia.

  • Added examples foundational/19a-tools-anthropic.py, foundational/19b-tools-video-anthropic.py and foundational/19a-tools-togetherai.py.

  • Added examples foundational/18-gstreamer-filesrc.py and foundational/18a-gstreamer-videotestsrc.py that show how to use GStreamerPipelineSource.

  • Removed requests library usage.

  • Cleaned up examples and used DailyRESTHelper.