I'm using the Node.js variant of the Google Speech API. Everything was fine and dandy until I dared to pass in an array of strings as the speech_context parameter. Whichever of the ways below I try, the stream breaks, but no error is emitted, so I can't diagnose it.
I pass an array of strings ["one", "two", "three"], pursuant to the documentation, or so I believe. My original config looks like this:
const cf = {
  config: {
    encoding: 'LINEAR16',
    sampleRate: 48000
  }
}
I've tried cf.config.speech_context = ARRAY, cf.config.speech_context.phrases = ARRAY, cf.speech_context = ARRAY, and cf.speech_context.phrases = ARRAY.
In every case I get no error message and no results. Nothing at all. The original config on its own works. Spelled out, the shapes I tried look roughly like the snippet below.
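// None of these produce output or errors; ARRAY stands for my actual phrase list.
const phrases = ['one', 'two', 'three'];

// Attempt 1: array directly on config
cf.config.speech_context = phrases;

// Attempt 2: phrases property under config (speech_context initialized as an object first)
cf.config.speech_context = { phrases: phrases };

// Attempt 3: array at the top level, next to config
cf.speech_context = phrases;

// Attempt 4: phrases property at the top level
cf.speech_context = { phrases: phrases };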
Here's the basic stream:
recognizeStream = speech.createRecognizeStream(cf)
.on('error', console.error)
.on('data', console.log)
Thoughts? Any help would be much appreciated!
const request = {
  config: {
    encoding: encoding,
    sampleRate: sampleRate,
    languageCode: 'en-IN-x-longform',
  }
};
// Stream the audio to the Google Cloud Speech API
const recognizeStream = speech.createRecognizeStream(request)
  .on('error', (error) => {
    console.error(error);
  })
  .on('data', (data) => {
    console.log('Data received: %j', data);
    if ('results' in data)
      console.log(chalk.bgYellow(data.results));
    logger.log(JSON.stringify(data));
  });
// Stream an audio file from disk to the Speech API, e.g. "./resources/audio.raw"
const filename = "./resources/audio.raw";
fs.createReadStream(filename).pipe(recognizeStream);
This is working for me. I think you are missing the pipe part, so you can do:
recognizeStream = speech.createRecognizeStream(cf)
  .on('error', console.error)
  .on('data', console.log);

const filename = "./resources/audio.raw";
fs.createReadStream(filename).pipe(recognizeStream);
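On the original speech_context question: my reading of the v1beta1 RecognitionConfig/SpeechContext proto (treat this as an assumption, not a verified fix) is that the field expects an object with a phrases array inside config, not a bare string array. The camelCase spelling below is also a guess; the snake_case speech_context form may work instead. A minimal sketch:

// Assumed shape: speechContext is an object whose phrases field holds the string array.
const cf = {
  config: {
    encoding: 'LINEAR16',
    sampleRate: 48000,
    speechContext: {
      phrases: ['one', 'two', 'three']
    }
  }
};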
