A new voice conversion method based on a structured Gaussian mixture model in the cepstrum eigenspace is proposed for non-parallel corpora without joint training. For each speaker, cepstral features are extracted from the speech and mapped into the eigenspace spanned by the eigenvectors of their scatter matrix, in which a Structured Gaussian Mixture Model in the EigenSpace (SGMM-ES) is trained. The independently trained SGMM-ES of the source and target speakers are aligned according to the Acoustic Universal Structure (AUS) principle to derive the short-time spectrum transform function. Experimental results show that the average speaker identification rate of the converted speech reaches 95.25% and the average cepstrum distortion is 1.25, improvements of 0.8% and 7.3% respectively over the SGMM method in the original cepstrum feature space. ABX and MOS evaluations indicate that the conversion performance is very close to that of the traditional method under the parallel-corpora condition. These results show that the eigenspace-based structured Gaussian mixture model is effective for voice conversion with non-parallel corpora.
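The eigenspace mapping described above, projecting cepstral feature vectors onto the leading eigenvectors of their scatter matrix before training the SGMM-ES, can be sketched as follows. This is an illustrative sketch only: the feature data, dimensionality, and number of retained eigenvectors are placeholders, not values from the paper.

```python
import numpy as np

def eigenspace_projection(features, n_dims):
    """Project feature vectors (rows of `features`) onto the eigenspace
    spanned by the `n_dims` leading eigenvectors of their scatter matrix.
    Illustrative sketch; equivalent to a PCA projection."""
    mean = features.mean(axis=0)
    centered = features - mean
    # Scatter matrix (proportional to the sample covariance matrix)
    scatter = centered.T @ centered
    # eigh is appropriate because the scatter matrix is symmetric
    eigvals, eigvecs = np.linalg.eigh(scatter)
    # Keep the eigenvectors with the largest eigenvalues
    order = np.argsort(eigvals)[::-1][:n_dims]
    basis = eigvecs[:, order]
    # Coordinates of each frame in the eigenspace
    return centered @ basis, mean, basis

# Toy example: 200 synthetic "cepstral" frames of dimension 13
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 13))
projected, mean, basis = eigenspace_projection(frames, n_dims=5)
print(projected.shape)  # (200, 5)
```

A GMM would then be fitted to `projected` for each speaker; the structured matching across speakers via the AUS principle is a separate step not shown here.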
A Gaussian mixture model with a fixed number of mixture components does not match the diverse distribution of speaker speech features, causing over-fitting or under-fitting and degrading recognition performance. An adaptive Gaussian mixture model with a variable number of mixtures is proposed and applied to speaker identification. During training, guided by the clustering characteristics of the speaker's feature-parameter distribution, an absorb-merge and split mechanism dynamically adjusts the number of mixtures to obtain a more accurate fit and improve the recognition rate. Experimental results show that with MFCC and BFCC (Bilinear Frequency Cepstrum Coefficients) features, the relative error rate decreases by 41.41% and 22.21% respectively.
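The merge/split idea behind the adaptive mixture number can be sketched on one-dimensional Gaussian components. This is a minimal illustration under assumed rules, not the paper's algorithm: components whose means fall within a hypothetical `merge_thresh` are absorbed into one moment-matched component, and a component whose variance exceeds a hypothetical `split_thresh` is split into two half-weight components.

```python
import numpy as np

def adapt_mixture(means, covs, weights, merge_thresh=0.5, split_thresh=4.0):
    """One merge/split pass over 1-D Gaussian components (m, v, w).
    Thresholds are illustrative tuning parameters, not from the paper."""
    comps = list(zip(means, covs, weights))
    # Merge pass: absorb components whose means are closer than merge_thresh
    merged = []
    while comps:
        m, v, w = comps.pop(0)
        keep = []
        for m2, v2, w2 in comps:
            if abs(m - m2) < merge_thresh:
                wt = w + w2
                mm = (w * m + w2 * m2) / wt
                # Moment-matched variance of the combined component
                vv = (w * (v + (m - mm) ** 2) + w2 * (v2 + (m2 - mm) ** 2)) / wt
                m, v, w = mm, vv, wt
            else:
                keep.append((m2, v2, w2))
        comps = keep
        merged.append((m, v, w))
    # Split pass: break up overly broad components
    out = []
    for m, v, w in merged:
        if v > split_thresh:
            s = np.sqrt(v)
            out += [(m - s / 2, v / 2, w / 2), (m + s / 2, v / 2, w / 2)]
        else:
            out.append((m, v, w))
    return out

# Two nearly coincident components merge; the broad one splits
comps = adapt_mixture([0.0, 0.1, 5.0], [1.0, 1.0, 9.0], [0.3, 0.3, 0.4])
print(len(comps))  # 3
```

In a full system, a pass like this would alternate with EM re-estimation so the adjusted mixture count tracks the clustering structure of the training features.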