
AI Blitz #9

Autocorrect on sound-file predictions

Extending the benchmark solution with autocorrect on the predictions to push up the LB :)

falak

Building on the awesome notebook by @shubhamaicrowd, I added autocorrect on the predictions, which gives a good edge on the LB.

Building further on the starter code: adding autocorrect at the model output for improved results.

We take the starter notebook as-is and apply autocorrect at the end, which improves the results by quite a bit over the baseline.

Autocorrection by JohnSnowLabs worked really well in practice on a lot of incorrectly spelled words. Most of the autocorrect code is borrowed from their notebook and updated to match our use case.
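To make the idea concrete, here is a minimal, self-contained sketch of dictionary-based spell correction in the style of Norvig's classic corrector. This is only an illustration of the principle; the notebook itself uses a pretrained JohnSnowLabs (Spark NLP) spell-checking model, and the tiny `CORPUS` below is a hypothetical stand-in for a real vocabulary.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for a real vocabulary;
# in the notebook a pretrained JohnSnowLabs model plays this role.
CORPUS = "the quick brown fox jumps over the lazy dog the fox"
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most frequent known candidate, else the word unchanged."""
    if word in WORDS:
        return word
    candidates = [w for w in edits1(word) if w in WORDS]
    return max(candidates, key=WORDS.get) if candidates else word

def autocorrect(prediction):
    """Apply word-level correction to a model's transcript prediction."""
    return " ".join(correct(w) for w in prediction.split())

print(autocorrect("the quikc brown fxo"))  # → "the quick brown fox"
```

Applied as a post-processing step on the speech model's transcripts, this kind of correction snaps near-miss words back to in-vocabulary spellings, which is exactly the edge the pretrained checker provides here.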

Install packages 🗃

In [1]:
!pip install aicrowd-cli
!mkdir assets
Collecting aicrowd-cli
  Downloading https://files.pythonhosted.org/packages/1f/57/59b5a00c6e90c9cc028b3da9dff90e242ad2847e735b1a0e81a21c616e27/aicrowd_cli-0.1.7-py3-none-any.whl (49kB)
     |████████████████████████████████| 51kB 4.6MB/s 
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Collecting gitpython<4,>=3.1.12
  Downloading https://files.pythonhosted.org/packages/bc/91/b38c4fabb6e5092ab23492ded4f318ab7299b19263272b703478038c0fbc/GitPython-3.1.18-py3-none-any.whl (170kB)
     |████████████████████████████████| 174kB 16.3MB/s 
Collecting rich<11,>=10.0.0
  Downloading https://files.pythonhosted.org/packages/69/a1/660d718e61d4c64fb8f1ef7b4aaf6db7a48a2b720cfac2991f06561d9a6c/rich-10.4.0-py3-none-any.whl (206kB)
     |████████████████████████████████| 215kB 50.6MB/s 
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Collecting tqdm<5,>=4.56.0
  Downloading https://files.pythonhosted.org/packages/b4/20/9f1e974bb4761128fc0d0a32813eaa92827309b1756c4b892d28adfb4415/tqdm-4.61.1-py2.py3-none-any.whl (75kB)
     |████████████████████████████████| 81kB 12.4MB/s 
Collecting requests-toolbelt<1,>=0.9.1
  Downloading https://files.pythonhosted.org/packages/60/ef/7681134338fc097acef8d9b2f8abe0458e4d87559c689a8c306d0957ece5/requests_toolbelt-0.9.1-py2.py3-none-any.whl (54kB)
     |████████████████████████████████| 61kB 10.3MB/s 
Collecting requests<3,>=2.25.1
  Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)
     |████████████████████████████████| 61kB 9.4MB/s 
Requirement already satisfied: typing-extensions>=3.7.4.0; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from gitpython<4,>=3.1.12->aicrowd-cli) (3.7.4.3)
Collecting gitdb<5,>=4.0.1
  Downloading https://files.pythonhosted.org/packages/ea/e8/f414d1a4f0bbc668ed441f74f44c116d9816833a48bf81d22b697090dba8/gitdb-4.0.7-py3-none-any.whl (63kB)
     |████████████████████████████████| 71kB 10.6MB/s 
Collecting colorama<0.5.0,>=0.4.0
  Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Collecting commonmark<0.10.0,>=0.9.0
  Downloading https://files.pythonhosted.org/packages/b1/92/dfd892312d822f36c55366118b95d914e5f16de11044a27cf10a7d71bbbf/commonmark-0.9.1-py2.py3-none-any.whl (51kB)
     |████████████████████████████████| 51kB 7.8MB/s 
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2021.5.30)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.0.4)
Collecting smmap<5,>=3.0.1
  Downloading https://files.pythonhosted.org/packages/68/ee/d540eb5e5996eb81c26ceffac6ee49041d473bc5125f2aa995cf51ec1cf1/smmap-4.0.0-py2.py3-none-any.whl
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: smmap, gitdb, gitpython, colorama, commonmark, rich, tqdm, requests, requests-toolbelt, aicrowd-cli
  Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
  Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
Successfully installed aicrowd-cli-0.1.7 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.18 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.4.0 smmap-4.0.0 tqdm-4.61.1

Installing DeepSpeech

The next four cells set up the environment for DeepSpeech; this is the trickiest part of the whole notebook.

In [2]:
!git clone --branch v0.9.3 https://github.com/mozilla/DeepSpeech
Cloning into 'DeepSpeech'...
remote: Enumerating objects: 23874, done.
remote: Counting objects: 100% (411/411), done.
remote: Compressing objects: 100% (185/185), done.
remote: Total 23874 (delta 231), reused 359 (delta 213), pack-reused 23463
Receiving objects: 100% (23874/23874), 49.48 MiB | 26.13 MiB/s, done.
Resolving deltas: 100% (16362/16362), done.
Note: checking out 'f2e9c85880dff94115ab510cde9ca4af7ee51c19'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

Install DeepSpeech Dependencies

All the steps in this section are taken from Train IITM.

In [3]:
%cd /content/
!sudo apt-get install python3-venv
!sudo apt-get install python3-dev
!pip install --upgrade pip
!sudo apt-get install sox
!sudo apt-get install sox libsox-fmt-mp3
!sudo apt install git
!pip install librosa==0.7.2
!sudo apt-get install pciutils
!lspci | grep -i nvidia

!wget https://github.com/git-lfs/git-lfs/releases/download/v2.11.0/git-lfs-linux-amd64-v2.11.0.tar.gz
!tar xvf /content/git-lfs-linux-amd64-v2.11.0.tar.gz -C /content
!sudo bash /content/install.sh
%cd /content/DeepSpeech
!git-lfs pull

!wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl
!pip install /content/DeepSpeech/ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl

!pip3 install folium==0.2.1
!pip3 install --upgrade pip==20.0.2 wheel==0.34.2 setuptools==46.1.3
!pip3 install --upgrade --force-reinstall -e .
/content
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  python-pip-whl python3.6-venv
The following NEW packages will be installed:
  python-pip-whl python3-venv python3.6-venv
0 upgraded, 3 newly installed, 0 to remove and 39 not upgraded.
Need to get 1,660 kB of archives.
After this operation, 1,902 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 python-pip-whl all 9.0.1-2.3~ubuntu1.18.04.5 [1,653 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 python3.6-venv amd64 3.6.9-1~18.04ubuntu1.4 [6,188 B]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 python3-venv amd64 3.6.7-1~18.04 [1,208 B]
Fetched 1,660 kB in 1s (1,183 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 3.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin: 
Selecting previously unselected package python-pip-whl.
(Reading database ... 160772 files and directories currently installed.)
Preparing to unpack .../python-pip-whl_9.0.1-2.3~ubuntu1.18.04.5_all.deb ...
Unpacking python-pip-whl (9.0.1-2.3~ubuntu1.18.04.5) ...
Selecting previously unselected package python3.6-venv.
Preparing to unpack .../python3.6-venv_3.6.9-1~18.04ubuntu1.4_amd64.deb ...
Unpacking python3.6-venv (3.6.9-1~18.04ubuntu1.4) ...
Selecting previously unselected package python3-venv.
Preparing to unpack .../python3-venv_3.6.7-1~18.04_amd64.deb ...
Unpacking python3-venv (3.6.7-1~18.04) ...
Setting up python-pip-whl (9.0.1-2.3~ubuntu1.18.04.5) ...
Setting up python3.6-venv (3.6.9-1~18.04ubuntu1.4) ...
Setting up python3-venv (3.6.7-1~18.04) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Reading package lists... Done
Building dependency tree       
Reading state information... Done
python3-dev is already the newest version (3.6.7-1~18.04).
python3-dev set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
Collecting pip
  Downloading https://files.pythonhosted.org/packages/47/ca/f0d790b6e18b3a6f3bd5e80c2ee4edbb5807286c21cdd0862ca933f751dd/pip-21.1.3-py3-none-any.whl (1.5MB)
     |████████████████████████████████| 1.6MB 8.1MB/s 
Installing collected packages: pip
  Found existing installation: pip 19.3.1
    Uninstalling pip-19.3.1:
      Successfully uninstalled pip-19.3.1
Successfully installed pip-21.1.3
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libmagic-mgc libmagic1 libopencore-amrnb0 libopencore-amrwb0 libsox-fmt-alsa
  libsox-fmt-base libsox3
Suggested packages:
  file libsox-fmt-all
The following NEW packages will be installed:
  libmagic-mgc libmagic1 libopencore-amrnb0 libopencore-amrwb0 libsox-fmt-alsa
  libsox-fmt-base libsox3 sox
0 upgraded, 8 newly installed, 0 to remove and 39 not upgraded.
Need to get 760 kB of archives.
After this operation, 6,717 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libopencore-amrnb0 amd64 0.1.3-2.1 [92.0 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libopencore-amrwb0 amd64 0.1.3-2.1 [45.8 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libmagic-mgc amd64 1:5.32-2ubuntu0.4 [184 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libmagic1 amd64 1:5.32-2ubuntu0.4 [68.6 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libsox3 amd64 14.4.2-3ubuntu0.18.04.1 [226 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libsox-fmt-alsa amd64 14.4.2-3ubuntu0.18.04.1 [10.6 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libsox-fmt-base amd64 14.4.2-3ubuntu0.18.04.1 [32.1 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 sox amd64 14.4.2-3ubuntu0.18.04.1 [101 kB]
Fetched 760 kB in 1s (603 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 8.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin: 
Selecting previously unselected package libopencore-amrnb0:amd64.
(Reading database ... 160810 files and directories currently installed.)
Preparing to unpack .../0-libopencore-amrnb0_0.1.3-2.1_amd64.deb ...
Unpacking libopencore-amrnb0:amd64 (0.1.3-2.1) ...
Selecting previously unselected package libopencore-amrwb0:amd64.
Preparing to unpack .../1-libopencore-amrwb0_0.1.3-2.1_amd64.deb ...
Unpacking libopencore-amrwb0:amd64 (0.1.3-2.1) ...
Selecting previously unselected package libmagic-mgc.
Preparing to unpack .../2-libmagic-mgc_1%3a5.32-2ubuntu0.4_amd64.deb ...
Unpacking libmagic-mgc (1:5.32-2ubuntu0.4) ...
Selecting previously unselected package libmagic1:amd64.
Preparing to unpack .../3-libmagic1_1%3a5.32-2ubuntu0.4_amd64.deb ...
Unpacking libmagic1:amd64 (1:5.32-2ubuntu0.4) ...
Selecting previously unselected package libsox3:amd64.
Preparing to unpack .../4-libsox3_14.4.2-3ubuntu0.18.04.1_amd64.deb ...
Unpacking libsox3:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Selecting previously unselected package libsox-fmt-alsa:amd64.
Preparing to unpack .../5-libsox-fmt-alsa_14.4.2-3ubuntu0.18.04.1_amd64.deb ...
Unpacking libsox-fmt-alsa:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Selecting previously unselected package libsox-fmt-base:amd64.
Preparing to unpack .../6-libsox-fmt-base_14.4.2-3ubuntu0.18.04.1_amd64.deb ...
Unpacking libsox-fmt-base:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Selecting previously unselected package sox.
Preparing to unpack .../7-sox_14.4.2-3ubuntu0.18.04.1_amd64.deb ...
Unpacking sox (14.4.2-3ubuntu0.18.04.1) ...
Setting up libmagic-mgc (1:5.32-2ubuntu0.4) ...
Setting up libmagic1:amd64 (1:5.32-2ubuntu0.4) ...
Setting up libopencore-amrnb0:amd64 (0.1.3-2.1) ...
Setting up libopencore-amrwb0:amd64 (0.1.3-2.1) ...
Setting up libsox3:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Setting up libsox-fmt-base:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Setting up libsox-fmt-alsa:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Setting up sox (14.4.2-3ubuntu0.18.04.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Reading package lists... Done
Building dependency tree       
Reading state information... Done
sox is already the newest version (14.4.2-3ubuntu0.18.04.1).
The following additional packages will be installed:
  libid3tag0 libmad0
The following NEW packages will be installed:
  libid3tag0 libmad0 libsox-fmt-mp3
0 upgraded, 3 newly installed, 0 to remove and 39 not upgraded.
Need to get 112 kB of archives.
After this operation, 370 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libid3tag0 amd64 0.15.1b-13 [31.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libmad0 amd64 0.15.1b-9ubuntu18.04.1 [64.6 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libsox-fmt-mp3 amd64 14.4.2-3ubuntu0.18.04.1 [15.9 kB]
Fetched 112 kB in 1s (133 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 3.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin: 
Selecting previously unselected package libid3tag0:amd64.
(Reading database ... 160898 files and directories currently installed.)
Preparing to unpack .../libid3tag0_0.15.1b-13_amd64.deb ...
Unpacking libid3tag0:amd64 (0.15.1b-13) ...
Selecting previously unselected package libmad0:amd64.
Preparing to unpack .../libmad0_0.15.1b-9ubuntu18.04.1_amd64.deb ...
Unpacking libmad0:amd64 (0.15.1b-9ubuntu18.04.1) ...
Selecting previously unselected package libsox-fmt-mp3:amd64.
Preparing to unpack .../libsox-fmt-mp3_14.4.2-3ubuntu0.18.04.1_amd64.deb ...
Unpacking libsox-fmt-mp3:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Setting up libid3tag0:amd64 (0.15.1b-13) ...
Setting up libmad0:amd64 (0.15.1b-9ubuntu18.04.1) ...
Setting up libsox-fmt-mp3:amd64 (14.4.2-3ubuntu0.18.04.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Reading package lists... Done
Building dependency tree       
Reading state information... Done
git is already the newest version (1:2.17.1-1ubuntu0.8).
0 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
Collecting librosa==0.7.2
  Downloading librosa-0.7.2.tar.gz (1.6 MB)
     |████████████████████████████████| 1.6 MB 8.4 MB/s 
Requirement already satisfied: audioread>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (2.1.9)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (1.19.5)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (1.4.1)
Requirement already satisfied: scikit-learn!=0.19.0,>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (0.22.2.post1)
Requirement already satisfied: joblib>=0.12 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (1.0.1)
Requirement already satisfied: decorator>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (4.4.2)
Requirement already satisfied: six>=1.3 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (1.15.0)
Requirement already satisfied: resampy>=0.2.2 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (0.2.2)
Requirement already satisfied: numba>=0.43.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (0.51.2)
Requirement already satisfied: soundfile>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from librosa==0.7.2) (0.10.3.post1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba>=0.43.0->librosa==0.7.2) (57.0.0)
Requirement already satisfied: llvmlite<0.35,>=0.34.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba>=0.43.0->librosa==0.7.2) (0.34.0)
Requirement already satisfied: cffi>=1.0 in /usr/local/lib/python3.7/dist-packages (from soundfile>=0.9.0->librosa==0.7.2) (1.14.5)
Requirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.0->soundfile>=0.9.0->librosa==0.7.2) (2.20)
Building wheels for collected packages: librosa
  Building wheel for librosa (setup.py) ... done
  Created wheel for librosa: filename=librosa-0.7.2-py3-none-any.whl size=1612900 sha256=92f57a0dd04e403ac9f0082e4c98c8f4d4badcad153fa27868dc72d6b850cf51
  Stored in directory: /root/.cache/pip/wheels/18/9e/42/3224f85730f92fa2925f0b4fb6ef7f9c5431a64dfc77b95b39
Successfully built librosa
Installing collected packages: librosa
  Attempting uninstall: librosa
    Found existing installation: librosa 0.8.1
    Uninstalling librosa-0.8.1:
      Successfully uninstalled librosa-0.8.1
Successfully installed librosa-0.7.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libpci3
The following NEW packages will be installed:
  libpci3 pciutils
0 upgraded, 2 newly installed, 0 to remove and 39 not upgraded.
Need to get 281 kB of archives.
After this operation, 1,430 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libpci3 amd64 1:3.5.2-1ubuntu1.1 [24.1 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 pciutils amd64 1:3.5.2-1ubuntu1.1 [257 kB]
Fetched 281 kB in 1s (270 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 2.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin: 
Selecting previously unselected package libpci3:amd64.
(Reading database ... 160920 files and directories currently installed.)
Preparing to unpack .../libpci3_1%3a3.5.2-1ubuntu1.1_amd64.deb ...
Unpacking libpci3:amd64 (1:3.5.2-1ubuntu1.1) ...
Selecting previously unselected package pciutils.
Preparing to unpack .../pciutils_1%3a3.5.2-1ubuntu1.1_amd64.deb ...
Unpacking pciutils (1:3.5.2-1ubuntu1.1) ...
Setting up libpci3:amd64 (1:3.5.2-1ubuntu1.1) ...
Setting up pciutils (1:3.5.2-1ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
00:04.0 3D controller: NVIDIA Corporation Device 1eb8 (rev a1)
--2021-06-29 17:13:23--  https://github.com/git-lfs/git-lfs/releases/download/v2.11.0/git-lfs-linux-amd64-v2.11.0.tar.gz
Resolving github.com (github.com)... 192.30.255.112
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/13021798/fa85ce00-9147-11ea-9ec4-c204e7a4e6cd?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T171323Z&X-Amz-Expires=300&X-Amz-Signature=645936baf536b83cd1cfc78931119fd271d11b7cb9218de98651d7c7878eb53c&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=13021798&response-content-disposition=attachment%3B%20filename%3Dgit-lfs-linux-amd64-v2.11.0.tar.gz&response-content-type=application%2Foctet-stream [following]
--2021-06-29 17:13:23--  https://github-releases.githubusercontent.com/13021798/fa85ce00-9147-11ea-9ec4-c204e7a4e6cd?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T171323Z&X-Amz-Expires=300&X-Amz-Signature=645936baf536b83cd1cfc78931119fd271d11b7cb9218de98651d7c7878eb53c&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=13021798&response-content-disposition=attachment%3B%20filename%3Dgit-lfs-linux-amd64-v2.11.0.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.108.154, 185.199.109.154, 185.199.110.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4666614 (4.5M) [application/octet-stream]
Saving to: ‘git-lfs-linux-amd64-v2.11.0.tar.gz’

git-lfs-linux-amd64 100%[===================>]   4.45M  20.1MB/s    in 0.2s    

2021-06-29 17:13:24 (20.1 MB/s) - ‘git-lfs-linux-amd64-v2.11.0.tar.gz’ saved [4666614/4666614]

README.md
CHANGELOG.md
git-lfs
install.sh
Git LFS initialized.
/content/DeepSpeech
--2021-06-29 17:13:24--  https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl
Resolving github.com (github.com)... 192.30.255.113
Connecting to github.com (github.com)|192.30.255.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/60273704/743cbd00-b1c6-11ea-96f6-79d96377b886?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T171324Z&X-Amz-Expires=300&X-Amz-Signature=6cc447d7b739e3076477d8a8281e5d83120a0be9a6521de053efc16afa174291&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=60273704&response-content-disposition=attachment%3B%20filename%3Dds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl&response-content-type=application%2Foctet-stream [following]
--2021-06-29 17:13:24--  https://github-releases.githubusercontent.com/60273704/743cbd00-b1c6-11ea-96f6-79d96377b886?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T171324Z&X-Amz-Expires=300&X-Amz-Signature=6cc447d7b739e3076477d8a8281e5d83120a0be9a6521de053efc16afa174291&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=60273704&response-content-disposition=attachment%3B%20filename%3Dds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.110.154, 185.199.108.154, 185.199.111.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.110.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1852926 (1.8M) [application/octet-stream]
Saving to: ‘ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl’

ds_ctcdecoder-0.7.4 100%[===================>]   1.77M  --.-KB/s    in 0.09s   

2021-06-29 17:13:25 (19.1 MB/s) - ‘ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl’ saved [1852926/1852926]

ERROR: ds_ctcdecoder-0.7.4-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
Collecting folium==0.2.1
  Downloading folium-0.2.1.tar.gz (69 kB)
     |████████████████████████████████| 69 kB 4.9 MB/s 
Requirement already satisfied: Jinja2 in /usr/local/lib/python3.7/dist-packages (from folium==0.2.1) (2.11.3)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from Jinja2->folium==0.2.1) (2.0.1)
Building wheels for collected packages: folium
  Building wheel for folium (setup.py) ... done
  Created wheel for folium: filename=folium-0.2.1-py3-none-any.whl size=79808 sha256=9bf2fc661237d810f4387a1a264e67d2b225d45a0bc324d3a1e9f2de3e0f3951
  Stored in directory: /root/.cache/pip/wheels/9a/f0/3a/3f79a6914ff5affaf50cabad60c9f4d565283283c97f0bdccf
Successfully built folium
Installing collected packages: folium
  Attempting uninstall: folium
    Found existing installation: folium 0.8.3
    Uninstalling folium-0.8.3:
      Successfully uninstalled folium-0.8.3
Successfully installed folium-0.2.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Collecting pip==20.0.2
  Downloading pip-20.0.2-py2.py3-none-any.whl (1.4 MB)
     |████████████████████████████████| 1.4 MB 8.3 MB/s 
Collecting wheel==0.34.2
  Downloading wheel-0.34.2-py2.py3-none-any.whl (26 kB)
Collecting setuptools==46.1.3
  Downloading setuptools-46.1.3-py3-none-any.whl (582 kB)
     |████████████████████████████████| 582 kB 62.2 MB/s 
Installing collected packages: wheel, setuptools, pip
  Attempting uninstall: wheel
    Found existing installation: wheel 0.36.2
    Uninstalling wheel-0.36.2:
      Successfully uninstalled wheel-0.36.2
  Attempting uninstall: setuptools
    Found existing installation: setuptools 57.0.0
    Uninstalling setuptools-57.0.0:
      Successfully uninstalled setuptools-57.0.0
  Attempting uninstall: pip
    Found existing installation: pip 21.1.3
    Uninstalling pip-21.1.3:
      Successfully uninstalled pip-21.1.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.5.0 requires wheel~=0.35, but you have wheel 0.34.2 which is incompatible.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.25.1 which is incompatible.
Successfully installed pip-20.0.2 setuptools-46.1.3 wheel-0.34.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Obtaining file:///content/DeepSpeech
Collecting numpy
  Downloading numpy-1.21.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
     |████████████████████████████████| 15.7 MB 111 kB/s 
Collecting progressbar2
  Downloading progressbar2-3.53.1-py2.py3-none-any.whl (25 kB)
Collecting six
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting pyxdg
  Downloading pyxdg-0.27-py2.py3-none-any.whl (49 kB)
     |████████████████████████████████| 49 kB 8.4 MB/s 
Collecting attrdict
  Downloading attrdict-2.0.1-py2.py3-none-any.whl (9.9 kB)
Collecting absl-py
  Downloading absl_py-0.13.0-py3-none-any.whl (132 kB)
     |████████████████████████████████| 132 kB 73.0 MB/s 
Collecting semver
  Downloading semver-2.13.0-py2.py3-none-any.whl (12 kB)
Collecting opuslib==2.0.0
  Downloading opuslib-2.0.0.tar.gz (7.3 kB)
Collecting optuna
  Downloading optuna-2.8.0-py3-none-any.whl (301 kB)
     |████████████████████████████████| 301 kB 54.1 MB/s 
Collecting sox
  Downloading sox-1.4.1-py2.py3-none-any.whl (39 kB)
Collecting bs4
  Downloading bs4-0.0.1.tar.gz (1.1 kB)
Collecting pandas
  Downloading pandas-1.2.5-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (9.9 MB)
     |████████████████████████████████| 9.9 MB 78 kB/s 
Collecting requests
  Using cached requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting numba==0.47.0
  Downloading numba-0.47.0-cp37-cp37m-manylinux1_x86_64.whl (3.7 MB)
     |████████████████████████████████| 3.7 MB 55.1 MB/s 
Collecting llvmlite==0.31.0
  Downloading llvmlite-0.31.0-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB)
     |████████████████████████████████| 20.2 MB 53 kB/s 
Collecting librosa
  Downloading librosa-0.8.1-py3-none-any.whl (203 kB)
     |████████████████████████████████| 203 kB 71.6 MB/s 
Collecting soundfile
  Downloading SoundFile-0.10.3.post1-py2.py3-none-any.whl (21 kB)
Collecting ds_ctcdecoder==0.9.3
  Downloading ds_ctcdecoder-0.9.3-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB)
     |████████████████████████████████| 2.1 MB 53.3 MB/s 
Collecting tensorflow==1.15.4
  Downloading tensorflow-1.15.4-cp37-cp37m-manylinux2010_x86_64.whl (110.5 MB)
     |████████████████████████████████| 110.5 MB 16 kB/s 
Collecting python-utils>=2.3.0
  Downloading python_utils-2.5.6-py2.py3-none-any.whl (12 kB)
Collecting cmaes>=0.8.2
  Downloading cmaes-0.8.2-py3-none-any.whl (15 kB)
Collecting scipy!=1.4.0
  Downloading scipy-1.7.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.5 MB)
     |████████████████████████████████| 28.5 MB 33 kB/s 
Collecting cliff
  Downloading cliff-3.8.0-py3-none-any.whl (80 kB)
     |████████████████████████████████| 80 kB 10.5 MB/s 
Collecting alembic
  Downloading alembic-1.6.5-py2.py3-none-any.whl (164 kB)
     |████████████████████████████████| 164 kB 72.1 MB/s 
Collecting packaging>=20.0
  Downloading packaging-20.9-py2.py3-none-any.whl (40 kB)
     |████████████████████████████████| 40 kB 7.6 MB/s 
Collecting sqlalchemy>=1.1.0
  Downloading SQLAlchemy-1.4.20-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.5 MB)
     |████████████████████████████████| 1.5 MB 49.5 MB/s 
Collecting colorlog
  Downloading colorlog-5.0.1-py2.py3-none-any.whl (10 kB)
Collecting tqdm
  Using cached tqdm-4.61.1-py2.py3-none-any.whl (75 kB)
Collecting beautifulsoup4
  Downloading beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
     |████████████████████████████████| 115 kB 78.6 MB/s 
Collecting pytz>=2017.3
  Downloading pytz-2021.1-py2.py3-none-any.whl (510 kB)
     |████████████████████████████████| 510 kB 75.4 MB/s 
Collecting python-dateutil>=2.7.3
  Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
     |████████████████████████████████| 227 kB 76.0 MB/s 
Collecting idna<3,>=2.5
  Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
     |████████████████████████████████| 58 kB 8.8 MB/s 
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.6-py2.py3-none-any.whl (138 kB)
     |████████████████████████████████| 138 kB 79.2 MB/s 
Collecting chardet<5,>=3.0.2
  Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
     |████████████████████████████████| 178 kB 65.1 MB/s 
Collecting certifi>=2017.4.17
  Downloading certifi-2021.5.30-py2.py3-none-any.whl (145 kB)
     |████████████████████████████████| 145 kB 78.7 MB/s 
Collecting setuptools
  Downloading setuptools-57.0.0-py3-none-any.whl (821 kB)
     |████████████████████████████████| 821 kB 49.7 MB/s 
Collecting joblib>=0.14
  Downloading joblib-1.0.1-py3-none-any.whl (303 kB)
     |████████████████████████████████| 303 kB 77.8 MB/s 
Collecting decorator>=3.0.0
  Downloading decorator-5.0.9-py3-none-any.whl (8.9 kB)
Collecting pooch>=1.0
  Downloading pooch-1.4.0-py3-none-any.whl (51 kB)
     |████████████████████████████████| 51 kB 801 kB/s 
Collecting audioread>=2.0.0
  Downloading audioread-2.1.9.tar.gz (377 kB)
     |████████████████████████████████| 377 kB 62.1 MB/s 
Collecting resampy>=0.2.2
  Downloading resampy-0.2.2.tar.gz (323 kB)
     |████████████████████████████████| 323 kB 70.1 MB/s 
Collecting scikit-learn!=0.19.0,>=0.14.0
  Downloading scikit_learn-0.24.2-cp37-cp37m-manylinux2010_x86_64.whl (22.3 MB)
     |████████████████████████████████| 22.3 MB 1.4 MB/s 
Collecting cffi>=1.0
  Downloading cffi-1.14.5-cp37-cp37m-manylinux1_x86_64.whl (402 kB)
     |████████████████████████████████| 402 kB 60.9 MB/s 
Collecting keras-preprocessing>=1.0.5
  Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
     |████████████████████████████████| 42 kB 1.8 MB/s 
Collecting wrapt>=1.11.1
  Downloading wrapt-1.12.1.tar.gz (27 kB)
Collecting protobuf>=3.6.1
  Downloading protobuf-3.17.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
     |████████████████████████████████| 1.0 MB 51.3 MB/s 
Collecting google-pasta>=0.1.6
  Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
     |████████████████████████████████| 57 kB 7.5 MB/s 
Collecting keras-applications>=1.0.8
  Downloading Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
     |████████████████████████████████| 50 kB 9.1 MB/s 
Collecting gast==0.2.2
  Downloading gast-0.2.2.tar.gz (10 kB)
Collecting wheel>=0.26; python_version >= "3"
  Downloading wheel-0.36.2-py2.py3-none-any.whl (35 kB)
Collecting tensorflow-estimator==1.15.1
  Downloading tensorflow_estimator-1.15.1-py2.py3-none-any.whl (503 kB)
     |████████████████████████████████| 503 kB 60.2 MB/s 
Collecting termcolor>=1.1.0
  Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting grpcio>=1.8.6
  Downloading grpcio-1.38.1-cp37-cp37m-manylinux2014_x86_64.whl (4.2 MB)
     |████████████████████████████████| 4.2 MB 29.3 MB/s 
Collecting tensorboard<1.16.0,>=1.15.0
  Downloading tensorboard-1.15.0-py3-none-any.whl (3.8 MB)
     |████████████████████████████████| 3.8 MB 53.9 MB/s 
Collecting opt-einsum>=2.3.2
  Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
     |████████████████████████████████| 65 kB 5.7 MB/s 
Collecting astor>=0.6.0
  Downloading astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting PyYAML>=3.12
  Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
     |████████████████████████████████| 636 kB 72.8 MB/s 
Collecting pyparsing>=2.1.0
  Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
     |████████████████████████████████| 67 kB 8.5 MB/s 
Collecting cmd2>=1.0.0
  Downloading cmd2-2.1.1-py3-none-any.whl (140 kB)
     |████████████████████████████████| 140 kB 71.9 MB/s 
Collecting stevedore>=2.0.1
  Downloading stevedore-3.3.0-py3-none-any.whl (49 kB)
     |████████████████████████████████| 49 kB 7.6 MB/s 
Collecting pbr!=2.1.0,>=2.0.0
  Downloading pbr-5.6.0-py2.py3-none-any.whl (111 kB)
     |████████████████████████████████| 111 kB 75.0 MB/s 
Collecting PrettyTable>=0.7.2
  Downloading prettytable-2.1.0-py3-none-any.whl (22 kB)
Collecting Mako
  Downloading Mako-1.1.4-py2.py3-none-any.whl (75 kB)
     |████████████████████████████████| 75 kB 6.3 MB/s 
Collecting python-editor>=0.3
  Downloading python_editor-1.0.4-py3-none-any.whl (4.9 kB)
Collecting importlib-metadata; python_version < "3.8"
  Downloading importlib_metadata-4.6.0-py3-none-any.whl (17 kB)
Collecting greenlet!=0.4.17; python_version >= "3"
  Downloading greenlet-1.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (160 kB)
     |████████████████████████████████| 160 kB 73.6 MB/s 
Collecting soupsieve>1.2; python_version >= "3.0"
  Downloading soupsieve-2.2.1-py3-none-any.whl (33 kB)
Collecting appdirs
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting threadpoolctl>=2.0.0
  Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB)
Collecting pycparser
  Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB)
     |████████████████████████████████| 112 kB 77.5 MB/s 
Collecting h5py
  Downloading h5py-3.3.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.1 MB)
     |████████████████████████████████| 4.1 MB 56.0 MB/s 
Collecting markdown>=2.6.8
  Downloading Markdown-3.3.4-py3-none-any.whl (97 kB)
     |████████████████████████████████| 97 kB 9.7 MB/s 
Collecting werkzeug>=0.11.15
  Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
     |████████████████████████████████| 288 kB 74.0 MB/s 
Collecting attrs>=16.3.0
  Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
     |████████████████████████████████| 53 kB 3.3 MB/s 
Collecting pyperclip>=1.6
  Downloading pyperclip-1.8.2.tar.gz (20 kB)
Collecting colorama>=0.3.7
  Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Collecting typing-extensions; python_version < "3.8"
  Downloading typing_extensions-3.10.0.0-py3-none-any.whl (26 kB)
Collecting wcwidth>=0.1.7
  Downloading wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting MarkupSafe>=0.9.2
  Downloading MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_x86_64.whl (31 kB)
Collecting zipp>=0.5
  Downloading zipp-3.4.1-py3-none-any.whl (5.2 kB)
Collecting cached-property; python_version < "3.8"
  Downloading cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)
Building wheels for collected packages: opuslib, bs4, audioread, resampy, wrapt, gast, termcolor, pyperclip
  Building wheel for opuslib (setup.py) ... done
  Created wheel for opuslib: filename=opuslib-2.0.0-py3-none-any.whl size=11009 sha256=4125d1a6d893fd989a7b773b53bed1a4ab71c1a135911b0d2ea8ca713d9afd30
  Stored in directory: /root/.cache/pip/wheels/e5/ba/d4/0e81231a9797fbb262ae3a54fd761fab850db7f32d94a3283a
  Building wheel for bs4 (setup.py) ... done
  Created wheel for bs4: filename=bs4-0.0.1-py3-none-any.whl size=1272 sha256=f3c979312eede87d2838bf7af922f90904d4613bd7f121e8da66e6bca216ed6c
  Stored in directory: /root/.cache/pip/wheels/0a/9e/ba/20e5bbc1afef3a491f0b3bb74d508f99403aabe76eda2167ca
  Building wheel for audioread (setup.py) ... done
  Created wheel for audioread: filename=audioread-2.1.9-py3-none-any.whl size=23142 sha256=26dfa6ea8bcbc8dfe0328382b04424ce2e66c949d51c780d71c65e37e9f1101d
  Stored in directory: /root/.cache/pip/wheels/ba/7b/eb/213741ccc0678f63e346ab8dff10495995ca3f426af87b8d88
  Building wheel for resampy (setup.py) ... done
  Created wheel for resampy: filename=resampy-0.2.2-py3-none-any.whl size=320720 sha256=ff4370a96d3cd330e2089d75bd48387e1e9eaa645c7251e69e6ff7ea1df16760
  Stored in directory: /root/.cache/pip/wheels/a0/18/0a/8ad18a597d8333a142c9789338a96a6208f1198d290ece356c
  Building wheel for wrapt (setup.py) ... done
  Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_x86_64.whl size=68668 sha256=77fc41e23cfc77a9e1911c1055b5c2e289ea5e30557bce622ed2d884c2d28094
  Stored in directory: /root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
  Building wheel for gast (setup.py) ... done
  Created wheel for gast: filename=gast-0.2.2-py3-none-any.whl size=7539 sha256=050a8e58f6e3a5f48be3d51474a55b6c028973ac90144d765062d48ad575b414
  Stored in directory: /root/.cache/pip/wheels/21/7f/02/420f32a803f7d0967b48dd823da3f558c5166991bfd204eef3
  Building wheel for termcolor (setup.py) ... done
  Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=8cfc167a1009bc124aae18a784446aed825c3af46c2a64d3eb0a110772d01884
  Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
  Building wheel for pyperclip (setup.py) ... done
  Created wheel for pyperclip: filename=pyperclip-1.8.2-py3-none-any.whl size=11107 sha256=d46c3eca0646d5c7d0dd034a0fc5b7efff41ce2b4f29a7e37be495865a176184
  Stored in directory: /root/.cache/pip/wheels/9f/18/84/8f69f8b08169c7bae2dde6bd7daf0c19fca8c8e500ee620a28
Successfully built opuslib bs4 audioread resampy wrapt gast termcolor pyperclip
ERROR: tensorflow 1.15.4 has requirement numpy<1.19.0,>=1.16.0, but you'll have numpy 1.21.0 which is incompatible.
ERROR: tensorflow-probability 0.12.1 has requirement gast>=0.3.2, but you'll have gast 0.2.2 which is incompatible.
ERROR: tensorflow-metadata 1.0.0 has requirement absl-py<0.13,>=0.9, but you'll have absl-py 0.13.0 which is incompatible.
ERROR: networkx 2.5.1 has requirement decorator<5,>=4.3, but you'll have decorator 5.0.9 which is incompatible.
ERROR: moviepy 0.2.3.5 has requirement decorator<5.0,>=4.0.2, but you'll have decorator 5.0.9 which is incompatible.
ERROR: kapre 0.3.5 has requirement tensorflow>=2.0.0, but you'll have tensorflow 1.15.4 which is incompatible.
ERROR: google-colab 1.0.0 has requirement pandas~=1.1.0; python_version >= "3.0", but you'll have pandas 1.2.5 which is incompatible.
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: google-colab 1.0.0 has requirement six~=1.15.0, but you'll have six 1.16.0 which is incompatible.
ERROR: flask 1.1.4 has requirement Werkzeug<2.0,>=0.15, but you'll have werkzeug 2.0.1 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.
Installing collected packages: numpy, six, python-utils, progressbar2, pyxdg, attrdict, absl-py, semver, opuslib, cmaes, scipy, PyYAML, pyparsing, attrs, zipp, typing-extensions, importlib-metadata, pyperclip, colorama, wcwidth, cmd2, pbr, stevedore, PrettyTable, cliff, MarkupSafe, Mako, python-editor, greenlet, sqlalchemy, python-dateutil, alembic, packaging, colorlog, tqdm, optuna, sox, soupsieve, beautifulsoup4, bs4, pytz, pandas, idna, urllib3, chardet, certifi, requests, setuptools, llvmlite, numba, pycparser, cffi, soundfile, joblib, decorator, appdirs, pooch, audioread, resampy, threadpoolctl, scikit-learn, librosa, ds-ctcdecoder, keras-preprocessing, wrapt, protobuf, google-pasta, cached-property, h5py, keras-applications, gast, wheel, tensorflow-estimator, termcolor, grpcio, markdown, werkzeug, tensorboard, opt-einsum, astor, tensorflow, deepspeech-training
  Attempting uninstall: numpy
    Found existing installation: numpy 1.19.5
    Uninstalling numpy-1.19.5:
      Successfully uninstalled numpy-1.19.5
  Attempting uninstall: six
    Found existing installation: six 1.15.0
    Uninstalling six-1.15.0:
      Successfully uninstalled six-1.15.0
  Attempting uninstall: python-utils
    Found existing installation: python-utils 2.5.6
    Uninstalling python-utils-2.5.6:
      Successfully uninstalled python-utils-2.5.6
  Attempting uninstall: progressbar2
    Found existing installation: progressbar2 3.38.0
    Uninstalling progressbar2-3.38.0:
      Successfully uninstalled progressbar2-3.38.0
  Attempting uninstall: absl-py
    Found existing installation: absl-py 0.12.0
    Uninstalling absl-py-0.12.0:
      Successfully uninstalled absl-py-0.12.0
  Attempting uninstall: semver
    Found existing installation: semver 2.13.0
    Uninstalling semver-2.13.0:
      Successfully uninstalled semver-2.13.0
  Attempting uninstall: scipy
    Found existing installation: scipy 1.4.1
    Uninstalling scipy-1.4.1:
      Successfully uninstalled scipy-1.4.1
  Attempting uninstall: PyYAML
    Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
  Attempting uninstall: pyparsing
    Found existing installation: pyparsing 2.4.7
    Uninstalling pyparsing-2.4.7:
      Successfully uninstalled pyparsing-2.4.7
  Attempting uninstall: attrs
    Found existing installation: attrs 21.2.0
    Uninstalling attrs-21.2.0:
      Successfully uninstalled attrs-21.2.0
  Attempting uninstall: zipp
    Found existing installation: zipp 3.4.1
    Uninstalling zipp-3.4.1:
      Successfully uninstalled zipp-3.4.1
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.7.4.3
    Uninstalling typing-extensions-3.7.4.3:
      Successfully uninstalled typing-extensions-3.7.4.3
  Attempting uninstall: importlib-metadata
    Found existing installation: importlib-metadata 4.5.0
    Uninstalling importlib-metadata-4.5.0:
      Successfully uninstalled importlib-metadata-4.5.0
  Attempting uninstall: colorama
    Found existing installation: colorama 0.4.4
    Uninstalling colorama-0.4.4:
      Successfully uninstalled colorama-0.4.4
  Attempting uninstall: wcwidth
    Found existing installation: wcwidth 0.2.5
    Uninstalling wcwidth-0.2.5:
      Successfully uninstalled wcwidth-0.2.5
  Attempting uninstall: PrettyTable
    Found existing installation: prettytable 2.1.0
    Uninstalling prettytable-2.1.0:
      Successfully uninstalled prettytable-2.1.0
  Attempting uninstall: MarkupSafe
    Found existing installation: MarkupSafe 2.0.1
    Uninstalling MarkupSafe-2.0.1:
      Successfully uninstalled MarkupSafe-2.0.1
  Attempting uninstall: greenlet
    Found existing installation: greenlet 1.1.0
    Uninstalling greenlet-1.1.0:
      Successfully uninstalled greenlet-1.1.0
  Attempting uninstall: sqlalchemy
    Found existing installation: SQLAlchemy 1.4.18
    Uninstalling SQLAlchemy-1.4.18:
      Successfully uninstalled SQLAlchemy-1.4.18
  Attempting uninstall: python-dateutil
    Found existing installation: python-dateutil 2.8.1
    Uninstalling python-dateutil-2.8.1:
      Successfully uninstalled python-dateutil-2.8.1
  Attempting uninstall: packaging
    Found existing installation: packaging 20.9
    Uninstalling packaging-20.9:
      Successfully uninstalled packaging-20.9
  Attempting uninstall: tqdm
    Found existing installation: tqdm 4.61.1
    Uninstalling tqdm-4.61.1:
      Successfully uninstalled tqdm-4.61.1
  Attempting uninstall: beautifulsoup4
    Found existing installation: beautifulsoup4 4.6.3
    Uninstalling beautifulsoup4-4.6.3:
      Successfully uninstalled beautifulsoup4-4.6.3
  Attempting uninstall: bs4
    Found existing installation: bs4 0.0.1
    Uninstalling bs4-0.0.1:
      Successfully uninstalled bs4-0.0.1
  Attempting uninstall: pytz
    Found existing installation: pytz 2018.9
    Uninstalling pytz-2018.9:
      Successfully uninstalled pytz-2018.9
  Attempting uninstall: pandas
    Found existing installation: pandas 1.1.5
    Uninstalling pandas-1.1.5:
      Successfully uninstalled pandas-1.1.5
  Attempting uninstall: idna
    Found existing installation: idna 2.10
    Uninstalling idna-2.10:
      Successfully uninstalled idna-2.10
  Attempting uninstall: urllib3
    Found existing installation: urllib3 1.24.3
    Uninstalling urllib3-1.24.3:
      Successfully uninstalled urllib3-1.24.3
  Attempting uninstall: chardet
    Found existing installation: chardet 3.0.4
    Uninstalling chardet-3.0.4:
      Successfully uninstalled chardet-3.0.4
  Attempting uninstall: certifi
    Found existing installation: certifi 2021.5.30
    Uninstalling certifi-2021.5.30:
      Successfully uninstalled certifi-2021.5.30
  Attempting uninstall: requests
    Found existing installation: requests 2.25.1
    Uninstalling requests-2.25.1:
      Successfully uninstalled requests-2.25.1
  Attempting uninstall: setuptools
    Found existing installation: setuptools 46.1.3
    Uninstalling setuptools-46.1.3:
      Successfully uninstalled setuptools-46.1.3
  Attempting uninstall: llvmlite
    Found existing installation: llvmlite 0.34.0
    Uninstalling llvmlite-0.34.0:
      Successfully uninstalled llvmlite-0.34.0
  Attempting uninstall: numba
    Found existing installation: numba 0.51.2
    Uninstalling numba-0.51.2:
      Successfully uninstalled numba-0.51.2
  Attempting uninstall: pycparser
    Found existing installation: pycparser 2.20
    Uninstalling pycparser-2.20:
      Successfully uninstalled pycparser-2.20
  Attempting uninstall: cffi
    Found existing installation: cffi 1.14.5
    Uninstalling cffi-1.14.5:
      Successfully uninstalled cffi-1.14.5
  Attempting uninstall: soundfile
    Found existing installation: SoundFile 0.10.3.post1
    Uninstalling SoundFile-0.10.3.post1:
      Successfully uninstalled SoundFile-0.10.3.post1
  Attempting uninstall: joblib
    Found existing installation: joblib 1.0.1
    Uninstalling joblib-1.0.1:
      Successfully uninstalled joblib-1.0.1
  Attempting uninstall: decorator
    Found existing installation: decorator 4.4.2
    Uninstalling decorator-4.4.2:
      Successfully uninstalled decorator-4.4.2
  Attempting uninstall: appdirs
    Found existing installation: appdirs 1.4.4
    Uninstalling appdirs-1.4.4:
      Successfully uninstalled appdirs-1.4.4
  Attempting uninstall: pooch
    Found existing installation: pooch 1.4.0
    Uninstalling pooch-1.4.0:
      Successfully uninstalled pooch-1.4.0
  Attempting uninstall: audioread
    Found existing installation: audioread 2.1.9
    Uninstalling audioread-2.1.9:
      Successfully uninstalled audioread-2.1.9
  Attempting uninstall: resampy
    Found existing installation: resampy 0.2.2
    Uninstalling resampy-0.2.2:
      Successfully uninstalled resampy-0.2.2
  Attempting uninstall: scikit-learn
    Found existing installation: scikit-learn 0.22.2.post1
    Uninstalling scikit-learn-0.22.2.post1:
      Successfully uninstalled scikit-learn-0.22.2.post1
  Attempting uninstall: librosa
    Found existing installation: librosa 0.7.2
    Uninstalling librosa-0.7.2:
      Successfully uninstalled librosa-0.7.2
  Attempting uninstall: keras-preprocessing
    Found existing installation: Keras-Preprocessing 1.1.2
    Uninstalling Keras-Preprocessing-1.1.2:
      Successfully uninstalled Keras-Preprocessing-1.1.2
  Attempting uninstall: wrapt
    Found existing installation: wrapt 1.12.1
    Uninstalling wrapt-1.12.1:
      Successfully uninstalled wrapt-1.12.1
  Attempting uninstall: protobuf
    Found existing installation: protobuf 3.12.4
    Uninstalling protobuf-3.12.4:
      Successfully uninstalled protobuf-3.12.4
  Attempting uninstall: google-pasta
    Found existing installation: google-pasta 0.2.0
    Uninstalling google-pasta-0.2.0:
      Successfully uninstalled google-pasta-0.2.0
  Attempting uninstall: cached-property
    Found existing installation: cached-property 1.5.2
    Uninstalling cached-property-1.5.2:
      Successfully uninstalled cached-property-1.5.2
  Attempting uninstall: h5py
    Found existing installation: h5py 3.1.0
    Uninstalling h5py-3.1.0:
      Successfully uninstalled h5py-3.1.0
  Attempting uninstall: gast
    Found existing installation: gast 0.4.0
    Uninstalling gast-0.4.0:
      Successfully uninstalled gast-0.4.0
  Attempting uninstall: wheel
    Found existing installation: wheel 0.34.2
    Uninstalling wheel-0.34.2:
      Successfully uninstalled wheel-0.34.2
  Attempting uninstall: tensorflow-estimator
    Found existing installation: tensorflow-estimator 2.5.0
    Uninstalling tensorflow-estimator-2.5.0:
      Successfully uninstalled tensorflow-estimator-2.5.0
  Attempting uninstall: termcolor
    Found existing installation: termcolor 1.1.0
    Uninstalling termcolor-1.1.0:
      Successfully uninstalled termcolor-1.1.0
  Attempting uninstall: grpcio
    Found existing installation: grpcio 1.34.1
    Uninstalling grpcio-1.34.1:
      Successfully uninstalled grpcio-1.34.1
  Attempting uninstall: markdown
    Found existing installation: Markdown 3.3.4
    Uninstalling Markdown-3.3.4:
      Successfully uninstalled Markdown-3.3.4
  Attempting uninstall: werkzeug
    Found existing installation: Werkzeug 1.0.1
    Uninstalling Werkzeug-1.0.1:
      Successfully uninstalled Werkzeug-1.0.1
  Attempting uninstall: tensorboard
    Found existing installation: tensorboard 2.5.0
    Uninstalling tensorboard-2.5.0:
      Successfully uninstalled tensorboard-2.5.0
  Attempting uninstall: opt-einsum
    Found existing installation: opt-einsum 3.3.0
    Uninstalling opt-einsum-3.3.0:
      Successfully uninstalled opt-einsum-3.3.0
  Attempting uninstall: astor
    Found existing installation: astor 0.8.1
    Uninstalling astor-0.8.1:
      Successfully uninstalled astor-0.8.1
  Attempting uninstall: tensorflow
    Found existing installation: tensorflow 2.5.0
    Uninstalling tensorflow-2.5.0:
      Successfully uninstalled tensorflow-2.5.0
  Running setup.py develop for deepspeech-training
Successfully installed Mako-1.1.4 MarkupSafe-2.0.1 PrettyTable-2.1.0 PyYAML-5.4.1 absl-py-0.13.0 alembic-1.6.5 appdirs-1.4.4 astor-0.8.1 attrdict-2.0.1 attrs-21.2.0 audioread-2.1.9 beautifulsoup4-4.9.3 bs4-0.0.1 cached-property-1.5.2 certifi-2021.5.30 cffi-1.14.5 chardet-4.0.0 cliff-3.8.0 cmaes-0.8.2 cmd2-2.1.1 colorama-0.4.4 colorlog-5.0.1 decorator-5.0.9 deepspeech-training ds-ctcdecoder-0.9.3 gast-0.2.2 google-pasta-0.2.0 greenlet-1.1.0 grpcio-1.38.1 h5py-3.3.0 idna-2.10 importlib-metadata-4.6.0 joblib-1.0.1 keras-applications-1.0.8 keras-preprocessing-1.1.2 librosa-0.8.1 llvmlite-0.31.0 markdown-3.3.4 numba-0.47.0 numpy-1.21.0 opt-einsum-3.3.0 optuna-2.8.0 opuslib-2.0.0 packaging-20.9 pandas-1.2.5 pbr-5.6.0 pooch-1.4.0 progressbar2-3.53.1 protobuf-3.17.3 pycparser-2.20 pyparsing-2.4.7 pyperclip-1.8.2 python-dateutil-2.8.1 python-editor-1.0.4 python-utils-2.5.6 pytz-2021.1 pyxdg-0.27 requests-2.25.1 resampy-0.2.2 scikit-learn-0.24.2 scipy-1.7.0 semver-2.13.0 setuptools-57.0.0 six-1.16.0 soundfile-0.10.3.post1 soupsieve-2.2.1 sox-1.4.1 sqlalchemy-1.4.20 stevedore-3.3.0 tensorboard-1.15.0 tensorflow-1.15.4 tensorflow-estimator-1.15.1 termcolor-1.1.0 threadpoolctl-2.1.0 tqdm-4.61.1 typing-extensions-3.10.0.0 urllib3-1.26.6 wcwidth-0.2.5 werkzeug-2.0.1 wheel-0.36.2 wrapt-1.12.1 zipp-3.4.1

Running this cell restarts the Colab runtime. Once it has restarted, continue by running the cells below.

In [ ]:
!nvcc --version
!nvidia-smi

# Restarting the runtime; run only the cells below after Colab has restarted
import os
os.kill(os.getpid(), 9)
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0

Set default CUDA version

  • An input prompt will ask for confirmation before changing the CUDA configuration — press Y
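The cell below appends the CUDA 10.0 directories to `PATH` and `LD_LIBRARY_PATH` unconditionally, so re-running it duplicates the entries. If you expect to re-run the setup, a small helper (a sketch using only the standard library; the function name `add_to_path_once` is our own, not part of any API) can make the update idempotent:

```python
import os

def add_to_path_once(var: str, entry: str, sep: str = ":") -> str:
    """Append `entry` to environment variable `var` only if it is not already present."""
    current = os.environ.get(var, "")
    parts = current.split(sep) if current else []
    if entry not in parts:
        parts.append(entry)
    os.environ[var] = sep.join(parts)
    return os.environ[var]

# Safe to re-run: each CUDA 10.0 path is added exactly once.
add_to_path_once("PATH", "/usr/local/cuda-10.0/bin")
add_to_path_once("LD_LIBRARY_PATH", "/usr/local/cuda-10.0/lib64")
```

This is optional — the unconditional `+=` in the cell below works fine for a single run.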
In [1]:
# The default CUDA version in Colab is 11.0 (see nvcc output above); we need 10.0 for TensorFlow 1.15

! echo $PATH

import os
os.environ['PATH'] += ":/usr/local/cuda-10.0/bin"
os.environ['CUDADIR'] = "/usr/local/cuda-10.0"
os.environ['LD_LIBRARY_PATH'] = "/usr/lib64-nvidia:/usr/local/cuda-10.0/lib64"

!echo $PATH
!echo $LD_LIBRARY_PATH
!source ~/.bashrc

!env | grep -i cuda

%cd /content/
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
!sudo apt-get install build-essential dkms
!sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub

!sudo apt-get update
!sudo apt-get install cuda-10-0

!sudo rm /usr/local/cuda
!sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda
%ls -l /usr/local/

!pip3 uninstall tensorflow -y
!pip3 install 'tensorflow-gpu==1.15.2'
/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin
/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/cuda-10.0/bin
/usr/lib64-nvidia:/usr/local/cuda-10.0/lib64
LD_LIBRARY_PATH=/usr/lib64-nvidia:/usr/local/cuda-10.0/lib64
CUDADIR=/usr/local/cuda-10.0
LIBRARY_PATH=/usr/local/cuda/lib64/stubs
CUDA_VERSION=11.0.3
NVIDIA_REQUIRE_CUDA=cuda>=11.0 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/cuda-10.0/bin
/content
--2021-06-29 17:16:03--  https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 152.195.19.142
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|152.195.19.142|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2940 (2.9K) [application/x-deb]
Saving to: ‘cuda-repo-ubuntu1804_10.0.130-1_amd64.deb’

cuda-repo-ubuntu180 100%[===================>]   2.87K  --.-KB/s    in 0s      

2021-06-29 17:16:03 (174 MB/s) - ‘cuda-repo-ubuntu1804_10.0.130-1_amd64.deb’ saved [2940/2940]

Reading package lists... Done
Building dependency tree       
Reading state information... Done
libxi-dev is already the newest version (2:1.7.9-1).
libxi-dev set to manually installed.
libxmu-dev is already the newest version (2:1.1.2-2).
libxmu-dev set to manually installed.
freeglut3 is already the newest version (2.8.1-3).
freeglut3 set to manually installed.
freeglut3-dev is already the newest version (2.8.1-3).
freeglut3-dev set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
dkms is already the newest version (2.3-3ubuntu9.7).
dkms set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
Selecting previously unselected package cuda-repo-ubuntu1804.
(Reading database ... 160942 files and directories currently installed.)
Preparing to unpack cuda-repo-ubuntu1804_10.0.130-1_amd64.deb ...
Unpacking cuda-repo-ubuntu1804 (10.0.130-1) ...
Setting up cuda-repo-ubuntu1804 (10.0.130-1) ...

Configuration file '/etc/apt/sources.list.d/cuda.list'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** cuda.list (Y/I/N/O/D/Z) [default=N] ? y
Installing new version of config file /etc/apt/sources.list.d/cuda.list ...
Executing: /tmp/apt-key-gpghome.V6c8ZqRr77/gpg.1.sh --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
gpg: requesting key from 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub'
gpg: key F60F4B3D7FA2AF80: "cudatools <cudatools@nvidia.com>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Ign:1 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  InRelease
Get:2 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Release [697 B]
Get:3 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Release.gpg [836 B]
Get:4 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]
Ign:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  InRelease
Hit:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  Release
Get:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Ign:8 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Packages
Get:8 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Packages [630 kB]
Get:9 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]
Hit:11 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:12 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [473 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Hit:14 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Get:15 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,221 kB]
Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,418 kB]
Get:17 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease [15.9 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,656 kB]
Get:20 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]
Get:21 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,777 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,188 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [506 kB]
Get:24 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [909 kB]
Get:25 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic/main amd64 Packages [40.9 kB]
Get:26 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [41.5 kB]
Fetched 13.2 MB in 5s (2,683 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
cuda-10-0 is already the newest version (10.0.130-1).
0 upgraded, 0 newly installed, 0 to remove and 84 not upgraded.
total 80
drwxr-xr-x  1 root root 4096 Jun 29 17:15 bin/
lrwxrwxrwx  1 root root   20 Jun 29 17:17 cuda -> /usr/local/cuda-10.0/
drwxr-xr-x 16 root root 4096 Jun 15 13:23 cuda-10.0/
drwxr-xr-x 15 root root 4096 Jun 15 13:25 cuda-10.1/
drwxr-xr-x  1 root root 4096 Jun 15 13:28 cuda-11.0/
drwxr-xr-x  1 root root 4096 Jun 17 13:30 etc/
drwxr-xr-x  2 root root 4096 Sep 21  2020 games/
drwxr-xr-x  2 root root 4096 Jun 17 13:41 _gcs_config_ops.so/
drwxr-xr-x  1 root root 4096 Jun 17 13:49 include/
drwxr-xr-x  1 root root 4096 Jun 17 13:49 lib/
-rw-r--r--  1 root root 1636 Jun 17 13:43 LICENSE.txt
drwxr-xr-x  3 root root 4096 Jun 17 13:40 licensing/
lrwxrwxrwx  1 root root    9 Sep 21  2020 man -> share/man/
drwxr-xr-x  2 root root 4096 Sep 21  2020 sbin/
-rw-r--r--  1 root root 7291 Jun 17 13:43 setup.cfg
drwxr-xr-x  1 root root 4096 Jun 17 13:40 share/
drwxr-xr-x  2 root root 4096 Sep 21  2020 src/
drwxr-xr-x  2 root root 4096 Jun 17 13:51 xgboost/
Found existing installation: tensorflow 1.15.4
Uninstalling tensorflow-1.15.4:
  Successfully uninstalled tensorflow-1.15.4
Collecting tensorflow-gpu==1.15.2
  Downloading tensorflow_gpu-1.15.2-cp37-cp37m-manylinux2010_x86_64.whl (410.9 MB)
     |████████████████████████████████| 410.9 MB 34 kB/s 
Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.15.0)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (0.8.1)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.21.0)
Requirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.15.1)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (0.13.0)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (3.17.3)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.12.1)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (0.36.2)
Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.0.8)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.16.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.1.2)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (1.38.1)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (0.2.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (3.3.0)
Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu==1.15.2) (0.2.2)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (57.0.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (2.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.3.4)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu==1.15.2) (3.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (4.6.0)
Requirement already satisfied: cached-property; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from h5py->keras-applications>=1.0.8->tensorflow-gpu==1.15.2) (1.5.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow-gpu==1.15.2) (3.10.0.0)
Installing collected packages: tensorflow-gpu
Successfully installed tensorflow-gpu-1.15.2

Importing Libraries 💻

In [2]:
# Importing Libraries
import pandas as pd
import re
from ast import literal_eval
import os
import librosa


# To make things more beautiful! 
from rich.console import Console
from rich.table import Table
from rich import pretty
pretty.install()
from IPython.display import Audio


import numpy as np
import json
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession

import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
DATA_FOLDER = "data"
In [3]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive

Training phase ⚙️

Downloading Dataset

Same as previous challenges, we need to download the dataset using AIcrowd CLI

In [4]:
API_KEY = '' # Please get your API Key from [https://www.aicrowd.com/participants/me]
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [5]:
# Downloading the Dataset
!rm -rf data
!mkdir data

!aicrowd dataset download --challenge sound-prediction -j 3 -o data
train.csv: 100% 713k/713k [00:00<00:00, 5.80MB/s]
test.zip:   0% 0.00/160M [00:00<?, ?B/s]
train.zip:   0% 0.00/643M [00:00<?, ?B/s]
train.zip:   5% 33.6M/643M [00:00<00:08, 75.8MB/s]

test.csv: 100% 159k/159k [00:00<00:00, 2.00MB/s]

train.zip:  10% 67.1M/643M [00:00<00:06, 91.5MB/s]

val.csv: 100% 69.1k/69.1k [00:00<00:00, 1.78MB/s]

train.zip:  16% 101M/643M [00:01<00:05, 99.3MB/s] 
test.zip:  21% 33.6M/160M [00:01<00:07, 17.3MB/s]
train.zip:  26% 168M/643M [00:01<00:05, 87.6MB/s]
train.zip:  31% 201M/643M [00:02<00:04, 93.6MB/s]
train.zip:  37% 235M/643M [00:02<00:04, 99.8MB/s]

val.zip:   0% 0.00/63.9M [00:00<?, ?B/s]
train.zip:  42% 268M/643M [00:02<00:03, 101MB/s] 
test.zip:  42% 67.1M/160M [00:03<00:04, 19.3MB/s]
train.zip:  52% 336M/643M [00:03<00:03, 84.4MB/s]
train.zip:  57% 369M/643M [00:04<00:02, 92.3MB/s]
train.zip:  63% 403M/643M [00:04<00:02, 92.4MB/s]
train.zip:  68% 436M/643M [00:04<00:02, 94.3MB/s]
train.zip:  73% 470M/643M [00:05<00:01, 98.5MB/s]
test.zip:  63% 101M/160M [00:05<00:03, 17.3MB/s] 
train.zip:  83% 537M/643M [00:05<00:01, 101MB/s]
train.zip:  89% 570M/643M [00:05<00:00, 103MB/s]
train.zip:  94% 604M/643M [00:06<00:00, 101MB/s]
train.zip: 100% 643M/643M [00:06<00:00, 96.5MB/s]
test.zip: 100% 160M/160M [00:08<00:00, 18.5MB/s]


val.zip:  53% 33.6M/63.9M [00:05<00:05, 5.65MB/s]

val.zip: 100% 63.9M/63.9M [00:09<00:00, 6.55MB/s]

Unzipping Files

In [6]:
# Unzipping the zip files into the respective set folders
!unzip /content/data/train.zip  -d /content/data/train >/dev/null
!unzip /content/data/val.zip -d /content/data/val >/dev/null
!unzip /content/data/test.zip -d /content/data/test >/dev/null

Reading the Dataset

In [7]:
train_df = pd.read_csv(os.path.join(DATA_FOLDER, "train.csv"))
val_df = pd.read_csv(os.path.join(DATA_FOLDER, "val.csv"))
test_df = pd.read_csv(os.path.join(DATA_FOLDER, "test.csv"))

train_df
Out[7]:
SoundID label
0 0 efficient spatialtemporal context modeling for
1 1 on the space
2 2 baryogenesis through mixing
3 3 noncommutative gravity in three dimensions
4 4 effective thermal diffusivity in
... ... ...
19995 19995 dixmier trace for
19996 19996 removahedral congruences versus permutree cong...
19997 19997 viscous control of minimum
19998 19998 new boundary harnack inequalities with
19999 19999 a dynamic systems

20000 rows × 2 columns

Preprocessing the Dataset

In this section, we are going to add the extra columns that DeepSpeech needs for model training.

In [8]:
# Preprocessing Dataset Function
def preprocess_data(df, set_name):

  # Adding the Wav filepath 
  df['wav_filename'] = df['SoundID'].apply(lambda x : os.path.join("/content", "data", set_name+"/" +str(x) + ".wav"))
  
  df['transcript'] = df['label']
  
  # Adding the wav file size (in bytes). Most of the files are around 30,000 bytes,
  # so a constant placeholder is good enough here, but you can compute the real sizes if you want :)
  df['wav_filesize'] = 30000

  return df
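If you do want the real sizes instead of the 30,000-byte placeholder, `os.path.getsize` returns a file's size in bytes. A minimal sketch below; the temporary file only stands in for a wav clip so the snippet is self-contained:

```python
import os
import tempfile

# Create a throwaway 30,000-byte file to stand in for a wav clip
with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as f:
    f.write(b"\x00" * 30000)

# os.path.getsize reports the size in bytes, which is what the
# 'wav_filesize' column expects
size = os.path.getsize(f.name)
print(size)  # 30000

os.remove(f.name)
```

On the real dataframe this would simply be `df['wav_filesize'] = df['wav_filename'].apply(os.path.getsize)`.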
In [9]:
# Preprocessing all three sets
train_df = preprocess_data(train_df, "train")
val_df = preprocess_data(val_df, "val")
test_df = preprocess_data(test_df, "test")
val_df
Out[9]:
SoundID label wav_filename transcript wav_filesize
0 0 injectivity in higher order /content/data/val/0.wav injectivity in higher order 30000
1 1 minimal constraints in the parity /content/data/val/1.wav minimal constraints in the parity 30000
2 2 learning to refer /content/data/val/2.wav learning to refer 30000
3 3 on the expressive power /content/data/val/3.wav on the expressive power 30000
4 4 small parts in the bernoulli /content/data/val/4.wav small parts in the bernoulli 30000
... ... ... ... ... ...
1995 1995 responses of small quantum systems /content/data/val/1995.wav responses of small quantum systems 30000
1996 1996 thermal rectification in quantum /content/data/val/1996.wav thermal rectification in quantum 30000
1997 1997 decomposition and unitarity in quantum /content/data/val/1997.wav decomposition and unitarity in quantum 30000
1998 1998 on gravitational collapse in /content/data/val/1998.wav on gravitational collapse in 30000
1999 1999 cogrowth and spectral gap /content/data/val/1999.wav cogrowth and spectral gap 30000

2000 rows × 5 columns

Sound

Listening to some sounds with their respective labels

In [10]:
# Getting a sample from the dataset
example = train_df.iloc[10, :]

# Reading the sound using the path
sound, sample_rate = librosa.load(example['wav_filename'])

("Sound : ", sound), ("Sample Rate : ", sample_rate)
(
    ('Sound : ', array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)),
    ('Sample Rate : ', 22050)
)

The sound is a 1D array in which each value is the amplitude of the signal at that instant. The sample_rate tells how many of these amplitude values are played through the speaker each second.
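From those two values, the clip length in seconds is just the array length divided by the sample rate. A small sketch with a synthetic one-second tone (the 440 Hz tone is an arbitrary illustration; 22050 Hz matches librosa's default resampling rate seen above):

```python
import numpy as np

sample_rate = 22050  # librosa's default resampling rate
t = np.linspace(0, 1, sample_rate, endpoint=False)
sound = np.sin(2 * np.pi * 440 * t).astype(np.float32)  # 1 s of a 440 Hz tone

# Duration in seconds = number of amplitude samples / samples per second
duration = len(sound) / sample_rate
print(duration)  # 1.0
```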

Note : Lower Your PC Volume :)

In [11]:
Audio(example['wav_filename'])
Out[11]:
In [12]:
example['transcript']
'cold bosons in optical lattices'
Out[12]:
In [13]:
# Saving the preprocessing dataset
train_df.to_csv("deepspeech_train.csv", index=False)
val_df.to_csv("deepspeech_val.csv", index=False)
test_df.to_csv("deepspeech_test.csv", index=False)
In [25]:
! wget -c https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-checkpoint.tar.gz
! rm -r deepspeech-0.9.3-checkpoint
! tar -xvzf deepspeech-0.9.3-checkpoint.tar.gz
--2021-06-29 17:26:36--  https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-checkpoint.tar.gz
Resolving github.com (github.com)... 192.30.255.113
Connecting to github.com (github.com)|192.30.255.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/60273704/6598e800-3b0f-11eb-9e91-3db57dd0c70b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T172636Z&X-Amz-Expires=300&X-Amz-Signature=86ce77c5bd51641e1483011f8965d11419b7eb2fad92432de478d3b3eba7e4a8&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=60273704&response-content-disposition=attachment%3B%20filename%3Ddeepspeech-0.9.3-checkpoint.tar.gz&response-content-type=application%2Foctet-stream [following]
--2021-06-29 17:26:36--  https://github-releases.githubusercontent.com/60273704/6598e800-3b0f-11eb-9e91-3db57dd0c70b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210629T172636Z&X-Amz-Expires=300&X-Amz-Signature=86ce77c5bd51641e1483011f8965d11419b7eb2fad92432de478d3b3eba7e4a8&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=60273704&response-content-disposition=attachment%3B%20filename%3Ddeepspeech-0.9.3-checkpoint.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.108.154, 185.199.109.154, 185.199.110.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 645992216 (616M) [application/octet-stream]
Saving to: ‘deepspeech-0.9.3-checkpoint.tar.gz’

deepspeech-0.9.3-ch 100%[===================>] 616.07M  65.9MB/s    in 12s     

2021-06-29 17:26:49 (50.5 MB/s) - ‘deepspeech-0.9.3-checkpoint.tar.gz’ saved [645992216/645992216]

rm: cannot remove 'deepspeech-0.9.3-checkpoint': No such file or directory
deepspeech-0.9.3-checkpoint/
deepspeech-0.9.3-checkpoint/flags.txt
deepspeech-0.9.3-checkpoint/best_dev-1466475.meta
deepspeech-0.9.3-checkpoint/best_dev-1466475.index
deepspeech-0.9.3-checkpoint/best_dev_checkpoint
deepspeech-0.9.3-checkpoint/best_dev-1466475.data-00000-of-00001
deepspeech-0.9.3-checkpoint/checkpoint
In [21]:
! ls /root/.local/share/deepspeech/checkpoints/
ls: cannot access '/root/.local/share/deepspeech/checkpoints/': No such file or directory
In [ ]:
# ! pip uninstall numpy
# ! pip install numpy==1.19.5

Training the model + Validation + Testing

Now, using the DeepSpeech command line, we pass the dataset paths along with various other parameters. The model trains and validates every epoch, and runs the test once all epochs are done!

In [33]:
%cd DeepSpeech  

# We are going to use validation data instead of training because training will take a lot more time
# Putting the data files
# Setting up Model parameters
# Setting up the batch size and audio sample rate
# Using mixed precision so that the model will train faster
# Saving the test predictions

!python DeepSpeech.py --checkpoint_dir deepspeech-0.9.3-checkpoint/ \
--train_files ../deepspeech_train.csv,../deepspeech_val.csv \
--dev_files ../deepspeech_val.csv \
--test_files ../deepspeech_test.csv \
--n_hidden 2048 \
--train_cudnn True \
--train_batch_size 64 \
--dev_batch_size 32 --test_batch_size 128 \
--automatic_mixed_precision True --epochs 1 \
--test_output_file ../assets/output.txt \
--learning_rate 0.0001 \
--audio_sample_rate 8000

%cd ..

# --checkpoint_dir /root/.local/share/deepspeech/checkpoints/
/content/DeepSpeech
I0629 17:54:19.538950 140701439002496 utils.py:157] NumExpr defaulting to 2 threads.
I Enabling automatic mixed precision training.
I Loading best validating checkpoint from /root/.local/share/deepspeech/checkpoints/best_dev-1868
I Loading variable from checkpoint: cond_1/beta1_power
I Loading variable from checkpoint: cond_1/beta2_power
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel/Adam
I Loading variable from checkpoint: cudnn_lstm/opaque_kernel/Adam_1
I Loading variable from checkpoint: current_loss_scale
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: good_steps
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/bias/Adam
I Loading variable from checkpoint: layer_1/bias/Adam_1
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_1/weights/Adam
I Loading variable from checkpoint: layer_1/weights/Adam_1
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/bias/Adam
I Loading variable from checkpoint: layer_2/bias/Adam_1
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_2/weights/Adam
I Loading variable from checkpoint: layer_2/weights/Adam_1
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/bias/Adam
I Loading variable from checkpoint: layer_3/bias/Adam_1
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_3/weights/Adam
I Loading variable from checkpoint: layer_3/weights/Adam_1
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/bias/Adam
I Loading variable from checkpoint: layer_5/bias/Adam_1
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_5/weights/Adam
I Loading variable from checkpoint: layer_5/weights/Adam_1
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/bias/Adam
I Loading variable from checkpoint: layer_6/bias/Adam_1
I Loading variable from checkpoint: layer_6/weights
I Loading variable from checkpoint: layer_6/weights/Adam
I Loading variable from checkpoint: layer_6/weights/Adam_1
I Loading variable from checkpoint: learning_rate
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:02:44 | Steps: 343 | Loss: 14.051739     
Epoch 0 | Validation | Elapsed Time: 0:00:07 | Steps: 63 | Loss: 15.287244 | Dataset: ../deepspeech_val.csv
I Saved new best validating model with loss 15.287244 to: /root/.local/share/deepspeech/checkpoints/best_dev-2211
--------------------------------------------------------------------------------
I FINISHED optimization in 0:02:57.778914
I Loading best validating checkpoint from /root/.local/share/deepspeech/checkpoints/best_dev-2211
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: global_step
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
Testing model on ../deepspeech_test.csv
Test epoch | Steps: 40 | Elapsed Time: 0:54:16                                  
Test on ../deepspeech_test.csv - WER: 1.000000, CER: 1.000000, loss: 329.690735
--------------------------------------------------------------------------------
Best WER: 
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.846154, loss: 284.710968
 - wav: file:///content/data/test/3083.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "sudyolinew"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.807692, loss: 207.230637
 - wav: file:///content/data/test/1636.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "afurelyqur"
--------------------------------------------------------------------------------
WER: 1.000000, CER: 0.769231, loss: 180.604477
 - wav: file:///content/data/test/2320.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "acidpierimistrcs"
--------------------------------------------------------------------------------
WER: 2.000000, CER: 0.884615, loss: 372.877899
 - wav: file:///content/data/test/4570.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "weaklensing bytriaxial"
--------------------------------------------------------------------------------
WER: 2.000000, CER: 0.884615, loss: 362.290955
 - wav: file:///content/data/test/2329.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "fastcovariante estimation"
--------------------------------------------------------------------------------
Median WER: 
--------------------------------------------------------------------------------
WER: 4.000000, CER: 1.230769, loss: 286.071472
 - wav: file:///content/data/test/585.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "mesurunge rut ind pridicingtrajector"
--------------------------------------------------------------------------------
WER: 4.000000, CER: 0.961538, loss: 285.925781
 - wav: file:///content/data/test/178.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "intoding curves space tin"
--------------------------------------------------------------------------------
WER: 4.000000, CER: 0.923077, loss: 285.855103
 - wav: file:///content/data/test/2115.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "shot range and hig"
--------------------------------------------------------------------------------
WER: 4.000000, CER: 0.923077, loss: 285.777008
 - wav: file:///content/data/test/4246.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "aspects of quantum coleng"
--------------------------------------------------------------------------------
WER: 4.000000, CER: 0.923077, loss: 285.504028
 - wav: file:///content/data/test/3193.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "fihary ryacie asirvie o"
--------------------------------------------------------------------------------
Worst WER: 
--------------------------------------------------------------------------------
WER: 8.000000, CER: 1.153846, loss: 267.655518
 - wav: file:///content/data/test/798.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "towrd al fuancr pol form fact r of"
--------------------------------------------------------------------------------
WER: 8.000000, CER: 1.192308, loss: 221.071198
 - wav: file:///content/data/test/1938.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "icroedior in ens in tecep c on der"
--------------------------------------------------------------------------------
WER: 8.000000, CER: 1.192308, loss: 219.420547
 - wav: file:///content/data/test/4514.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "asyn foter sfactral moloys is o icen o"
--------------------------------------------------------------------------------
WER: 9.000000, CER: 1.307692, loss: 239.906021
 - wav: file:///content/data/test/4116.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "cor es t model in noblinear ry real one"
--------------------------------------------------------------------------------
WER: 10.000000, CER: 1.576923, loss: 319.345032
 - wav: file:///content/data/test/2254.wav
 - src: "abcdefghijklmnopqrstuvwxyz"
 - res: "tmi cro agetics in the ations in tero agnetis yg"
--------------------------------------------------------------------------------
/content

Getting the Predictions

In the previous command, we saved the test results as output.txt in the assets folder. Let's read the file and convert the outputs into .csv format.

In [34]:
# Reading the output.txt file
with open(os.path.join("assets", "output.txt")) as data:
    output = data.read()

# Convert the text into a Python list
output = literal_eval(output)
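`literal_eval` safely parses a string containing a Python literal; unlike `eval`, it accepts only literals, so no arbitrary code can run. A tiny sketch with a record shaped like the entries in `output.txt` (the values here are illustrative):

```python
from ast import literal_eval

# A string holding a Python-literal list of dicts, mirroring the
# structure DeepSpeech writes with --test_output_file
text = "[{'wav_filename': '/content/data/test/3083.wav', 'res': 'sudyolinew'}]"
records = literal_eval(text)  # -> a real Python list of dicts

print(records[0]['res'])  # sudyolinew
```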
In [35]:
# Getting the SoundID and the predicted label for submission
SoundID = [int(sample['wav_filename'].split("/")[-1].split(".")[0])  for sample in output]
label = [sample['res']  for sample in output]
print(SoundID[0], label[0])
3083 sudyolinew
In [51]:
test_df['SoundID'] = SoundID
test_df['label'] = label
test_df
Out[51]:
SoundID label wav_filename transcript wav_filesize
631 3083 sudyolinew /content/data/test/631.wav abcdefghijklmnopqrstuvwxyz 30000
2524 1636 afurelyqur /content/data/test/2524.wav abcdefghijklmnopqrstuvwxyz 30000
674 2320 acidpierimistrcs /content/data/test/674.wav abcdefghijklmnopqrstuvwxyz 30000
4162 4570 weaklensing bytriaxial /content/data/test/4162.wav abcdefghijklmnopqrstuvwxyz 30000
1695 2329 fastcovariante estimation /content/data/test/1695.wav abcdefghijklmnopqrstuvwxyz 30000
... ... ... ... ... ...
3744 798 towrd al fuancr pol form fact r of /content/data/test/3744.wav abcdefghijklmnopqrstuvwxyz 30000
2903 1938 icroedior in ens in tecep c on der /content/data/test/2903.wav abcdefghijklmnopqrstuvwxyz 30000
2999 4514 asyn foter sfactral moloys is o icen o /content/data/test/2999.wav abcdefghijklmnopqrstuvwxyz 30000
792 4116 cor es t model in noblinear ry real one /content/data/test/792.wav abcdefghijklmnopqrstuvwxyz 30000
4047 2254 tmi cro agetics in the ations in tero agnetis yg /content/data/test/4047.wav abcdefghijklmnopqrstuvwxyz 30000

5000 rows × 5 columns

In [53]:
# It is recommended to sort the rows by SoundID before making the submission
test_df = test_df.sort_values("SoundID")
test_df
Out[53]:
SoundID label wav_filename transcript wav_filesize
1527 0 erernalysis for probabilities /content/data/test/1527.wav abcdefghijklmnopqrstuvwxyz 30000
844 1 saely cisompetions of universal /content/data/test/844.wav abcdefghijklmnopqrstuvwxyz 30000
4226 2 fixe points of /content/data/test/4226.wav abcdefghijklmnopqrstuvwxyz 30000
836 3 ceometry of wagranggian brasen int /content/data/test/836.wav abcdefghijklmnopqrstuvwxyz 30000
408 4 creation and dansiong of /content/data/test/408.wav abcdefghijklmnopqrstuvwxyz 30000
... ... ... ... ... ...
3363 4995 plane waves with lea singularities /content/data/test/3363.wav abcdefghijklmnopqrstuvwxyz 30000
2630 4996 iteractive emscy us asatule /content/data/test/2630.wav abcdefghijklmnopqrstuvwxyz 30000
2449 4997 liceisntesis in e oscaismit /content/data/test/2449.wav abcdefghijklmnopqrstuvwxyz 30000
3426 4998 search for havy /content/data/test/3426.wav abcdefghijklmnopqrstuvwxyz 30000
2872 4999 esim igs boson serches in /content/data/test/2872.wav abcdefghijklmnopqrstuvwxyz 30000

5000 rows × 5 columns

In [40]:
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
--2021-06-29 18:52:35--  http://setup.johnsnowlabs.com/colab.sh
Resolving setup.johnsnowlabs.com (setup.johnsnowlabs.com)... 51.158.130.125
Connecting to setup.johnsnowlabs.com (setup.johnsnowlabs.com)|51.158.130.125|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/scripts/colab_setup.sh [following]
--2021-06-29 18:52:35--  https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/scripts/colab_setup.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1608 (1.6K) [text/plain]
Saving to: ‘STDOUT’

-                   100%[===================>]   1.57K  --.-KB/s    in 0s      

2021-06-29 18:52:35 (45.1 MB/s) - written to stdout [1608/1608]

setup Colab for PySpark 3.0.3 and Spark NLP 3.1.1
Ign:1 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  InRelease
Hit:2 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Release
Hit:3 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease
Ign:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  InRelease
Hit:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  Release
Get:6 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:8 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:9 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Hit:12 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Get:13 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Hit:14 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease
Hit:15 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease
Get:16 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,777 kB]
Get:17 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [909 kB]
Fetched 2,953 kB in 4s (810 kB/s)
Reading package lists... Done
     |████████████████████████████████| 209.1 MB 69 kB/s 
     |████████████████████████████████| 45 kB 2.1 MB/s 
     |████████████████████████████████| 198 kB 72.1 MB/s 
  Building wheel for pyspark (setup.py) ... done
In [43]:
spark = sparknlp.start()
In [44]:
document_assembler = DocumentAssembler()\
  .setInputCol("text")\
  .setOutputCol("document")

tokenizer = RecursiveTokenizer()\
  .setInputCols(["document"])\
  .setOutputCol("token")\
  .setPrefixes(["\"", "(", "[", "\n"])\
  .setSuffixes([".", ",", "?", ")","!", "‘s"])

spell_model = ContextSpellCheckerModel\
    .pretrained('spellcheck_dl')\
    .setInputCols("token")\
    .setOutputCol("corrected")

finisher = Finisher().setInputCols("corrected")

light_pipeline = Pipeline(stages = [
                                    document_assembler,
                                    tokenizer,
                                    spell_model,
                                    finisher
                                    ])
## For comparison
full_pipeline = Pipeline(
    stages = [
              document_assembler,
              tokenizer,
              spell_model
  ])

empty_ds = spark.createDataFrame([[""]]).toDF("text")
pipeline_model = full_pipeline.fit(empty_ds)
l_pipeline_model = LightPipeline(light_pipeline.fit(empty_ds))
spellcheck_dl download started this may take some time.
Approximate size to download 111.4 MB
[OK!]
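Conceptually, a spell checker proposes the vocabulary word closest to each token. The pretrained `ContextSpellCheckerModel` above is far more sophisticated (it also scores candidates against the surrounding sentence context), but a minimal context-free sketch using Levenshtein distance and a toy vocabulary looks like this:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Toy vocabulary; the real model draws on a large language corpus
vocab = ["quantum", "gravity", "spectral", "estimation"]

def correct(word: str) -> str:
    # Pick the vocabulary entry with the smallest edit distance
    return min(vocab, key=lambda v: edit_distance(word, v))

print(correct("gravty"))  # gravity
print(correct("quantm"))  # quantum
```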
In [45]:
# The raw model predictions that we want to autocorrect
all_labels = test_df['label'].tolist()

df = spark.createDataFrame(pd.DataFrame({"text": all_labels}))
# result = pipeline_model.transform(df)
In [46]:
all_labels_corrected = []
for i, sent in enumerate(all_labels):
  light_result = l_pipeline_model.annotate(sent)
  corr = ' '.join(light_result['corrected']) 
  all_labels_corrected.append(corr)
  # print(sent, corr)
  if i%100==0:
    print(i)
0
100
200
300
400
500
600
700
800
900
1000
1100
1200
1300
1400
1500
1600
1700
1800
1900
2000
2100
2200
2300
2400
2500
2600
2700
2800
2900
3000
3100
3200
3300
3400
3500
3600
3700
3800
3900
4000
4100
4200
4300
4400
4500
4600
4700
4800
4900
In [47]:
test_df['label'] = all_labels_corrected
In [48]:
test_df
Out[48]:
SoundID label wav_filename transcript wav_filesize
631 0 reanalysis for probabilities /content/data/test/631.wav abcdefghijklmnopqrstuvwxyz 30000
2524 1 sadly cisompetions of universal /content/data/test/2524.wav abcdefghijklmnopqrstuvwxyz 30000
674 2 five points of /content/data/test/674.wav abcdefghijklmnopqrstuvwxyz 30000
4162 3 geometry of wagranggian brazen in /content/data/test/4162.wav abcdefghijklmnopqrstuvwxyz 30000
1695 4 creation and mansions of /content/data/test/1695.wav abcdefghijklmnopqrstuvwxyz 30000
... ... ... ... ... ...
3744 4995 plane waves with lea similarities /content/data/test/3744.wav abcdefghijklmnopqrstuvwxyz 30000
2903 4996 interactive easy us astute /content/data/test/2903.wav abcdefghijklmnopqrstuvwxyz 30000
2999 4997 liceisntesis in e oscaismit /content/data/test/2999.wav abcdefghijklmnopqrstuvwxyz 30000
792 4998 search for have /content/data/test/792.wav abcdefghijklmnopqrstuvwxyz 30000
4047 4999 him is Koson searches in /content/data/test/4047.wav abcdefghijklmnopqrstuvwxyz 30000

5000 rows × 5 columns

Note : Please make sure there is a file named submission.csv in the assets folder before submitting

In [49]:
# Saving the sample submission in assets directory
test_df.to_csv(os.path.join("assets", "submission.csv"), index=False)

Submit to AIcrowd 🚀

Note : Please save the notebook before submitting it (Ctrl + S)

In [50]:
!aicrowd notebook submit -c sound-prediction -a assets --no-verify
Using notebook: /content/drive/MyDrive/Colab Notebooks/Speech Recognition load model autocorrect for submission...
Scrubbing API keys from the notebook...
Collecting notebook...
submission.zip ━━━━━━━━━━━━━━━━━━━━━━━━ 100.0%1.1/1.0 MB1.2 MB/s0:00:00
                                                 ╭─────────────────────────╮                                                  
                                                 │ Successfully submitted! │                                                  
                                                 ╰─────────────────────────╯                                                  
                                                       Important links                                                        
┌──────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/sound-prediction/submissions/148982              │
│                  │                                                                                                         │
│  All submissions │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/sound-prediction/submissions?my_submissions=true │
│                  │                                                                                                         │
│      Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/sound-prediction/leaderboards                    │
│                  │                                                                                                         │
│ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-9                                                              │
│                  │                                                                                                         │
│   Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-9/problems/sound-prediction                                 │
└──────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────┘

Congratulations 🎉 you did it! There is still plenty of room for improvement, though; changing the hyperparameters is a good place to start. Have fun!

And btw -

Don't be shy to ask questions about any errors you are getting, or doubts about any part of this notebook, in the discussion forum or on the AIcrowd Discord server. The AIcrew will be happy to help you :)

Also, want to give us your valuable feedback for the next Blitz, or work with us on creating Blitz challenges? Let us know!

In [ ]:
! ls /root/.local/share/deepspeech/checkpoints/
best_dev-1875.data-00000-of-00001  train-1250.meta
best_dev-1875.index		   train-1875.data-00000-of-00001
best_dev-1875.meta		   train-1875.index
best_dev_checkpoint		   train-1875.meta
checkpoint			   train-625.data-00000-of-00001
flags.txt			   train-625.index
train-1250.data-00000-of-00001	   train-625.meta
train-1250.index
In [ ]:
! tar -cvf  deepspeech.tar /root/.local/share/deepspeech/
tar: Removing leading `/' from member names
/root/.local/share/deepspeech/
/root/.local/share/deepspeech/summaries/
/root/.local/share/deepspeech/summaries/dev/
/root/.local/share/deepspeech/summaries/dev/events.out.tfevents.1623648076.d5af713e4200
/root/.local/share/deepspeech/summaries/metrics/
/root/.local/share/deepspeech/summaries/train/
/root/.local/share/deepspeech/summaries/train/events.out.tfevents.1623647865.d5af713e4200
/root/.local/share/deepspeech/checkpoints/
/root/.local/share/deepspeech/checkpoints/train-1875.data-00000-of-00001
/root/.local/share/deepspeech/checkpoints/train-625.index
/root/.local/share/deepspeech/checkpoints/train-625.data-00000-of-00001
/root/.local/share/deepspeech/checkpoints/best_dev-1875.meta
/root/.local/share/deepspeech/checkpoints/train-1875.meta
/root/.local/share/deepspeech/checkpoints/best_dev_checkpoint
/root/.local/share/deepspeech/checkpoints/flags.txt
/root/.local/share/deepspeech/checkpoints/best_dev-1875.data-00000-of-00001
/root/.local/share/deepspeech/checkpoints/train-1250.meta
/root/.local/share/deepspeech/checkpoints/train-625.meta
/root/.local/share/deepspeech/checkpoints/checkpoint
/root/.local/share/deepspeech/checkpoints/train-1875.index
/root/.local/share/deepspeech/checkpoints/train-1250.data-00000-of-00001
/root/.local/share/deepspeech/checkpoints/best_dev-1875.index
/root/.local/share/deepspeech/checkpoints/train-1250.index
In [56]:
! cp assets/submission.csv /content/drive/MyDrive/