Next step is to figure out how to get the AI into an app. I see TensorFlow has a couple of tutorials on that, so I'll follow one to set it up.
I'm following the TensorFlow Lite guide to complete the two steps circled above: bringing the model I built in a Google Colab notebook into an app-ready format in preparation for integration. (I also considered this source, but I'm worried it requires superfluous steps.)
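Here's a minimal sketch of what that conversion step looks like with the standard TensorFlow Lite converter API. The model below is just a stand-in (my real model is the one from the Colab notebook), and the output filename is a placeholder I made up:

```python
import tensorflow as tf

# Stand-in model; in practice this would be the trained haiku model
# loaded from the Colab notebook.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8),
])

# Convert the Keras model into the TensorFlow Lite flatbuffer format
# that an iOS/Android app can bundle and run.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk; the app would ship this file.
with open("haiku_model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is what gets dropped into the app project for on-device inference.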
This ^ GitHub repo has a good starter package for an iOS app with TensorFlow. It was last updated in June of '19, fairly recent, so I'm hopeful I won't hit too many bugs.
Unfortunately, I hit a bug after that first run, even though I changed literally one character and then immediately changed it back. I trashed the project and emptied my trash, planning to repeat the steps, but TensorFlow has a limit on how many downloads you can do, and it would not let me run a second package for free (with access to their files, GPU, etc.). So I decided to move on to actually building my AI and worry about putting the code in an app afterward.
NEW SOURCE – Jeremy Neiman focuses on building a haiku generator that strictly adheres to the 5-7-5 syllable structure, which, according to his article, posed a problem for past haiku generators:
most modern haiku don't adhere to that structure, which means a training corpus won't reflect it.
“Generating Haiku with Deep Learning (Part 1)” Jeremy Neiman
here is his github: https://github.com/docmarionum1/haikurnn
note that he uses recurrent neural networks (RNNs) as opposed to convolutional neural networks (CNNs)
according to this Stack Overflow thread, the difference is:
CNN:
- CNNs take fixed-size inputs and generate fixed-size outputs.
- CNNs are a type of feed-forward artificial neural network – variations of multilayer perceptrons designed to use minimal amounts of preprocessing.
- CNNs use a connectivity pattern between their neurons inspired by the organization of the animal visual cortex, whose individual neurons are arranged so that they respond to overlapping regions tiling the visual field.
- CNNs are ideal for image and video processing.
RNN:
- RNNs can handle arbitrary input/output lengths.
- RNNs, unlike feed-forward neural networks, can use their internal memory to process arbitrary sequences of inputs.
- RNNs use time-series information, i.e. what I said last impacts what I will say next.
- RNNs are ideal for text and speech analysis.
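To make the RNN side concrete, here's a minimal sketch of a character-level RNN in Keras. The vocabulary size and layer sizes here are illustrative guesses, not Neiman's actual values:

```python
import tensorflow as tf

VOCAB_SIZE = 64  # number of distinct characters in the corpus (assumed)

# Each training example is a sequence of character indices; the model
# predicts a probability distribution over the next character.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 32),            # map char indices to vectors
    tf.keras.layers.LSTM(128),                            # recurrent layer: the "internal memory"
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-character probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The LSTM layer is what gives the network the memory described above: what it "spoke" last impacts what it "speaks" next.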
I am using this Google Colab notebook to feed haikus into the training model. I am using the files made available by Neiman (mentioned above) because he has already compiled specifically 5-7-5 haiku, and I am storing them on my personal website: abigailtovastein.com/haiku-source
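The prep step in the notebook boils down to turning the haiku text into (input sequence, next character) training pairs. A rough sketch, using a stand-in string in place of the real corpus file:

```python
# Stand-in corpus; in practice this is read from the compiled haiku .txt files.
corpus = "an old silent pond\na frog jumps into the pond\nsplash silence again\n"

# Build a character vocabulary and an index for each character.
chars = sorted(set(corpus))
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 10  # characters of context per training example (assumed)

# Slide a window over the text: each window is an input sequence,
# and the character right after it is the target.
inputs, targets = [], []
for i in range(len(corpus) - SEQ_LEN):
    inputs.append([char_to_idx[c] for c in corpus[i:i + SEQ_LEN]])
    targets.append(char_to_idx[corpus[i + SEQ_LEN]])
```

These index sequences are what get fed into the model's fit() call, producing training logs like the ones below.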
Epoch 1/3
234/234 [==============================] - 5s 20ms/step - loss: 0.0536 - accuracy: 0.9999 - val_loss: 2.5273e-05 - val_accuracy: 1.0000
Epoch 2/3
234/234 [==============================] - 4s 19ms/step - loss: 4.7539e-06 - accuracy: 1.0000 - val_loss: 1.9558e-05 - val_accuracy: 1.0000
Epoch 3/3
234/234 [==============================] - 4s 18ms/step - loss: 4.6387e-06 - accuracy: 1.0000 - val_loss: 1.4532e-05 - val_accuracy: 1.0000
<tensorflow.python.keras.callbacks.History at 0x7f0e67d02588>
Next issue to iron out: as you can see, it is giving itself 100% accuracy (with near-zero loss) after a single epoch, which almost certainly means something is off with the data or the task rather than the model being perfect.
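One quick sanity check I can run (hypothetical, not from Neiman's code): look at the distribution of the training targets. If one class dominates, "accuracy" is mostly measuring the majority-class baseline rather than anything learned:

```python
from collections import Counter

# Stand-in targets; in practice these are the model's training labels.
targets = [0, 0, 0, 0, 1]

# Fraction of examples belonging to the single most common class.
# A model that always predicts that class scores this accuracy for free.
counts = Counter(targets)
majority_frac = counts.most_common(1)[0][1] / len(targets)
print(f"majority-class baseline accuracy: {majority_frac:.2%}")
```

If the baseline is already near 100%, the reported accuracy says nothing about the model, which would explain the suspicious numbers above.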
I added more sources. The next .txt training doc I found came from here (GitHub: geoffbass/Haiku-Generator).
Epoch 1/3
258/258 [==============================] - 5s 20ms/step - loss: 0.2487 - accuracy: 0.9357 - val_loss: 0.1500 - val_accuracy: 0.9554
Epoch 2/3
258/258 [==============================] - 5s 18ms/step - loss: 0.1131 - accuracy: 0.9666 - val_loss: 0.1334 - val_accuracy: 0.9592
Epoch 3/3
258/258 [==============================] - 5s 18ms/step - loss: 0.0887 - accuracy: 0.9756 - val_loss: 0.1353 - val_accuracy: 0.9574
<tensorflow.python.keras.callbacks.History at 0x7f642055dba8>
This accuracy is better: a validation accuracy around 95-96% is far more believable than a perfect score.