    using Pkg

    Pkg.activate("../../FastAI.jl/")
    Pkg.instantiate()

			  Activating project at `~/Desktop/dev/FastAI.jl`

			
			
			
    using Flux

    using FastAI, FastTimeSeries, Flux

			┌ Info: Precompiling FastAI [5d0beca9-ade8-49ae-ad0b-a3cf890e669f]
└ @ Base loading.jl:1423
┌ Info: Precompiling FastTimeSeries [5337c758-7610-4451-a331-8357b11df7c6]
└ @ Base loading.jl:1423

Time Series Classification


			
			
			
			
			
    data, blocks = load(datarecipes()["ecg5000"]);

getobs gets us a sample from the TimeSeriesDataset. It returns a tuple with the input time series and the corresponding label.


			
			
			
			
    input, class = sample = getobs(data, 25)

			(Float32[-0.28834122 -2.2725453 … 1.722784 1.2959242], "1")
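As a quick sanity check we can also look at the container size and the shape of a single input. A sketch; `numobs` comes from MLUtils.jl, whose data-container interface FastAI.jl builds on, and the 1 × 140 shape is what the univariate ECG5000 series should produce:

```julia
# Number of observations in the data container (MLUtils.jl API).
numobs(data)

# Each input is a channels × timesteps matrix; for the univariate
# ECG5000 series this should be a 1 × 140 Float32 matrix.
size(input)
```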

Now we create a learning task for time series classification: predicting a class label from a time series. We will use the TimeSeriesRow block as the input and the Label block as the target.


			
			
			
    task = SupervisedTask(
        blocks,
        (
            OneHot(),
            setup(TSPreprocessing, blocks[1], data[1].table)
        )
    )

			SupervisedTask(TimeSeriesRow -> Label{SubString{String}})

The encodings passed in transform samples into formats suitable as inputs and outputs for the model.

Let's check that samples from the created data container conform to the blocks of the learning task:


			
			
			

	
    checkblock(task.blocks.sample, sample)

			true

To get an overview of the learning task created, and as a sanity test, we can use describetask. This shows us what encodings will be applied to which blocks, and how the predicted ŷ values are decoded.


			
			
			

	
    describetask(task)

			
			
			
We can apply the task's encodings to a sample with encodesample:

    encoded_sample = encodesample(task, Training(), sample)

			(Float32[-0.28937635 -2.2807038 … 1.7289687 1.3005764], Bool[1, 0, 0, 0, 0])
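The encodings can also be run in reverse to recover human-readable values from model outputs. A sketch, assuming FastAI.jl's `decodeypred`, which inverts the target encodings (here the `OneHot`):

```julia
# Decode a one-hot (or probability) vector back into a class label.
ŷ = encoded_sample[2]             # Bool[1, 0, 0, 0, 0]
decodeypred(task, Training(), ŷ)  # should recover the label "1"
```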

Visualization Tools for Time Series


			
			
			
    sample = getobs(data, 1)

			(Float32[-0.11252183 -2.8272038 … 0.92528623 0.19313742], "1")

			
			
			

	
    showsample(task, sample)

			
			
			

	
    showblock(blocks[1], sample[1])

Training

We will use a StackedLSTM as the backbone model, with a Dense layer on top for classification. taskmodel knows how to construct this by looking at the data blocks used.


			
			
			
			
    backbone = FastTimeSeries.Models.StackedLSTM(1, 16, 10, 2);

			
			
			
			
    model = FastAI.taskmodel(task, backbone);

We can use tasklossfn to get a loss function suitable for our task.


			
			
			
    lossfn = tasklossfn(task)

			logitcrossentropy (generic function with 1 method)

Next we create a pair of training and validation data loaders. They take care of batching and loading the data in parallel in the background.


			
			
			
			
			
    traindl, validdl = taskdataloaders(data, task, 16);
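Before training, it can be worth inspecting one batch from the loader. A sketch; the exact dimensions depend on the preprocessing, but with univariate series of length 140 and a batch size of 16 one would expect inputs of roughly size (1, 140, 16) and one-hot targets of size (5, 16):

```julia
# Fetch the first (x, y) batch from the training data loader.
xs, ys = first(traindl)
size(xs), size(ys)
```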

We will use an Adam optimizer for this task.


			
			
			
    optimizer = ADAM(0.002)

			ADAM(0.002, (0.9, 0.999), 1.0e-8, IdDict{Any, Any}())

We create callbacks to track the accuracy during training.


			
			
			
			
    callbacks = [ToGPU(), Metrics(accuracy)];

With the addition of an optimizer and a loss function, we can now create a Learner and start training.


			
			
			
			
    learner = Learner(model, lossfn; data=(traindl, validdl), optimizer=optimizer, callbacks=callbacks);

			
			
			

	
    fitonecycle!(learner, 10, 0.002)

			┌ Info: The GPU function is being called but the GPU is not accessible. 
│ Defaulting back to the CPU. (No action is required if you want to run on the CPU).
└ @ Flux /Users/saksham/.julia/packages/Flux/js6mP/src/functor.jl:192
Epoch 1 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:40
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   1.0 │ 0.95453 │  0.65725 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 1 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   1.0 │ 0.36429 │   0.9082 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 2 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   2.0 │ 0.30034 │   0.9205 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 2 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   2.0 │ 0.28543 │  0.91211 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 3 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬────────┬──────────┐
│         Phase │ Epoch │   Loss │ Accuracy │
├───────────────┼───────┼────────┼──────────┤
│ TrainingPhase │   3.0 │ 0.2677 │  0.92825 │
└───────────────┴───────┴────────┴──────────┘
Epoch 3 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   3.0 │ 0.26776 │  0.91895 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 4 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   4.0 │ 0.23461 │   0.9355 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 4 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   4.0 │ 0.27086 │  0.92285 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 5 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   5.0 │ 0.22571 │   0.9375 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 5 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   5.0 │ 0.24774 │  0.93457 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 6 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   6.0 │ 0.21649 │   0.9385 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 6 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   6.0 │ 0.24026 │  0.93359 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 7 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   7.0 │ 0.21095 │  0.93825 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 7 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   7.0 │ 0.23704 │  0.93262 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 8 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   8.0 │ 0.20555 │  0.93975 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 8 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   8.0 │ 0.24263 │  0.93359 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 9 TrainingPhase(): 100%|██████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │   9.0 │ 0.20291 │  0.94075 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 9 ValidationPhase(): 100%|████████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │   9.0 │ 0.23519 │  0.93457 │
└─────────────────┴───────┴─────────┴──────────┘
Epoch 10 TrainingPhase(): 100%|█████████████████████████| Time: 0:00:03
┌───────────────┬───────┬─────────┬──────────┐
│         Phase │ Epoch │    Loss │ Accuracy │
├───────────────┼───────┼─────────┼──────────┤
│ TrainingPhase │  10.0 │ 0.19846 │    0.942 │
└───────────────┴───────┴─────────┴──────────┘
Epoch 10 ValidationPhase(): 100%|███████████████████████| Time: 0:00:00
┌─────────────────┬───────┬─────────┬──────────┐
│           Phase │ Epoch │    Loss │ Accuracy │
├─────────────────┼───────┼─────────┼──────────┤
│ ValidationPhase │  10.0 │ 0.23493 │  0.93457 │
└─────────────────┴───────┴─────────┴──────────┘

We can save the model for later inference using savetaskmodel:


			
			
			

	
    savetaskmodel("tsclassification.jld2", task, learner.model; force = true)
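The saved model can later be restored with loadtaskmodel, which returns the task together with the model. The call below is a sketch of inference using FastAI.jl's high-level `predict`, which applies the task's input encodings, runs the model, and decodes the prediction:

```julia
# Restore the learning task and trained model from disk.
task, model = loadtaskmodel("tsclassification.jld2")

# Run inference on a single input series.
input, _ = getobs(data, 25)
predict(task, model, input)
```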