Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.

Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

For local files, similar code can be used to perform the transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

The GetAudio call above is a placeholder for platform-specific microphone capture; one possible way to fill it in is sketched at the end of this article.

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK ships with built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, visit the official AssemblyAI blog.
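For reference, here is a minimal sketch of how the GetAudio placeholder from the real-time example could be filled in. It uses the third-party NAudio package for Windows microphone capture, which is not part of the AssemblyAI SDK; the capture format (16 kHz, 16-bit mono PCM) and the way each buffer is forwarded to SendAudioAsync are assumptions to adapt to your own audio stack.

using AssemblyAI.Realtime;
using NAudio.Wave; // assumption: NAudio supplies microphone capture on Windows

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Capture 16 kHz, 16-bit, mono PCM from the default input device.
using var waveIn = new WaveInEvent
{
    WaveFormat = new WaveFormat(16_000, 16, 1)
};

waveIn.DataAvailable += async (sender, args) =>
{
    // Copy only the bytes actually recorded, then forward them to the transcriber.
    var chunk = new byte[args.BytesRecorded];
    Array.Copy(args.Buffer, chunk, args.BytesRecorded);
    await transcriber.SendAudioAsync(chunk);
};

waveIn.StartRecording();
Console.WriteLine("Recording. Press Enter to stop.");
Console.ReadLine();
waveIn.StopRecording();

await transcriber.CloseAsync();

Any source of raw PCM at the configured sample rate would work in place of NAudio, which is used here purely for illustration.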