iOS Overview
Spokestack can be integrated with iOS apps developed in Objective-C and Swift.
Integrations by Feature
Add speech recognition, language understanding, and text-to-speech to your iOS app with one simple API.
Select a specific feature you’d like to integrate for more details. The example below configures a complete wake word and speech recognition pipeline:
import Spokestack

// Build a pipeline that listens for a wake word with on-device
// TensorFlow Lite models, then hands recognized speech to Apple's ASR.
// `build()` throws if the configuration is invalid.
let pipeline = try! SpeechPipelineBuilder()
    .addListener(self)
    .useProfile(.tfliteWakewordAppleSpeech)
    .setProperty("tracing", Trace.Level.PERF)
    .setProperty("detectModelPath", "detect.tflite")
    .setProperty("encodeModelPath", "encode.tflite")
    .setProperty("filterModelPath", "filter.tflite")
    .build()
pipeline.start()
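The pipeline delivers speech events to the listener registered with addListener(self). Here is a minimal sketch of that conformance, assuming the SpokestackDelegate protocol from recent spokestack-ios releases and a hypothetical MyViewController class; method names can vary between SDK versions:

extension MyViewController: SpokestackDelegate {

    // Receives the final transcript once the ASR stage completes.
    func didRecognize(_ result: SpeechContext) {
        print("transcript: \(result.transcript)")
    }

    // Receives errors reported by any pipeline stage.
    func failure(error: Error) {
        print("pipeline error: \(error)")
    }
}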
Try a Wake Word in Your Browser
This page includes an experimental in-browser tester for wake word models: press “Start test,” say “Spokestack,” and wait a few seconds for the result to appear.
Spokestack iOS SDKs
Spokestack manages voice interactions and delivers actionable user commands with just a few lines of code. To integrate, first decide whether you’ll need a drop-in UI widget.
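If you manage dependencies with Swift Package Manager, that choice maps to which packages you pull in. Below is a sketch of a Package.swift, assuming both SDKs are resolvable from their GitHub repositories; the package names, app name, and version numbers are placeholders, and the guides below also cover CocoaPods installation:

// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "MyVoiceApp",                // placeholder name for your own project
    platforms: [.iOS(.v13)],
    dependencies: [
        // Core SDK: speech pipeline, NLU, and TTS with no bundled UI.
        .package(name: "Spokestack", url: "https://github.com/spokestack/spokestack-ios", from: "14.0.0"),
        // Optional drop-in tray UI built on top of the core SDK.
        .package(name: "SpokestackTray", url: "https://github.com/spokestack/spokestack-tray-ios", from: "0.1.0")
    ],
    targets: [
        .target(name: "MyVoiceApp", dependencies: ["Spokestack", "SpokestackTray"])
    ]
)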
For step-by-step instructions on how to use spokestack-ios, check out our Getting Started Guide. For step-by-step instructions on how to use spokestack-tray-ios, check out our iOS Tray Tutorial.
Related Resources
Want to dive deeper into the world of iOS voice integration? We've got a lot to say on the subject: