Convolutional neural network (CNN) based methods, such as the convolutional encoder-decoder network, offer state-of-the-art results in monaural speech enhancement. In the conventional encoder-decoder network, a large kernel size is often used to increase model capacity, which, however, results in low parameter efficiency. This can be addressed with group convolution, as in AlexNet, where group convolutions are performed in parallel in each layer before their outputs are concatenated. However, with simple concatenation, inter-channel dependency information may be lost. To address this, ShuffleNet re-arranges the outputs of each group before concatenating them, where each convolution group takes only part of the whole input sequence as its input. In this work, we propose a new convolutional fusion network (CFN) for monaural speech enhancement that improves model performance, inter-channel dependency, information reuse and parameter efficiency. First, a new group convolutional fusion unit (GCFU), consisting of standard and depth-wise separable CNNs, is used to reconstruct the signal. Second, the whole input sequence (full information) is fed simultaneously to two convolution networks in parallel, and their outputs are re-arranged (shuffled) and then concatenated, in order to exploit the inter-channel dependency within the network. Third, an intra skip connection mechanism is used to connect different layers inside the encoder as well as the decoder to further improve model performance. Extensive experiments are performed to show the improved performance of the proposed method as compared with three recent baseline methods.
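The channel shuffle operation mentioned above can be illustrated with a minimal NumPy sketch. This is not the authors' CFN implementation, only the standard ShuffleNet-style re-arrangement: channels produced by separate convolution groups are interleaved before concatenation so that subsequent layers mix information across groups.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave the channels of x (shape N, C, T) across `groups`
    so that each contiguous block of output channels draws from
    every convolution group, preserving inter-channel dependency."""
    n, c, t = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    x = x.reshape(n, groups, c // groups, t)
    x = x.transpose(0, 2, 1, 3)  # swap the group and per-group channel axes
    return x.reshape(n, c, t)

# Toy example: 4 channels, first two from group 0, last two from group 1.
x = np.arange(4, dtype=float).reshape(1, 4, 1)
y = channel_shuffle(x, groups=2)
# Channel order is now interleaved across the two groups: 0, 2, 1, 3.
```

The reshape-transpose-reshape pattern is cheap (a view plus one copy) and is what makes group convolutions competitive with full convolutions at a fraction of the parameter count.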

More information

Original publication

DOI

10.1016/j.neunet.2021.05.017

Type

Journal article

Publication Date

2021-11-01T00:00:00+00:00

Volume

143

Pages

97 - 107

Total pages

10

Addresses

Intelligent Sensing and Communications Research Group, School of Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, UK; College of Computer and Communication Engineering, ZhengZhou University of Light Industry, Zhengzhou, China. Electronic address: Y.xian2@newcastle.ac.uk.

Keywords

Speech, Neural Networks, Computer