[ sublime ] reek and rubocop PATH problem in macOS [ SOLVED ]

Problem

/usr/bin/ruby: No such file or directory -- rubocop (LoadError)

or

/usr/bin/ruby: No such file or directory -- reek (LoadError)

 

Solution

1. Add the PATH variable

"paths": {
    "linux": [],
    "osx": [
        "/Users/phongsathorneakamongul/.rvm/rubies/ruby-2.1.2/bin"
    ],
    "windows": []
},

2. If you still get a LoadError even though the Ruby path exists:

/Users/phongsathorneakamongul/.rvm/rubies/ruby-2.1.2/bin/ruby: No such file or directory -- reek (LoadError)

Try installing the Fix Mac Path package via Package Control (Sublime's package manager).

 

[ Deep Q ] Convolutional neural networks

CNNs have been applied successfully to analyzing visual imagery.

  • Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
  • Mathematically, what a CNN layer computes is a cross-correlation rather than a true convolution (see the sketch just below this list)
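
To see the difference, here is a minimal numpy/scipy sketch (the toy image and kernel below are made up for illustration): a true convolution flips the kernel before sliding it over the image, while what a CNN layer actually computes slides the kernel as-is, i.e. a cross-correlation.

import numpy as np
from scipy.signal import convolve2d, correlate2d

x = np.arange(9, dtype=float).reshape(3, 3)   # toy "image"
k = np.array([[1., 0.],
              [0., -1.]])                     # toy, deliberately asymmetric kernel

# what a CNN layer computes: slide k over x without flipping it
print(correlate2d(x, k, mode='valid'))

# a true convolution flips the kernel first; flipping it back by hand
# reproduces the cross-correlation result above
print(convolve2d(x, k[::-1, ::-1], mode='valid'))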

 

  1. Resize each frame to 80×80
  2. Stack the last 4 frames to produce an 80×80×4 input array for the network (see the preprocessing sketch below)
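
A rough preprocessing sketch of these two steps in Python, assuming OpenCV (cv2) and numpy are installed; frame stands for a hypothetical raw RGB game frame, and the exact color/threshold handling in the referenced repos differs slightly:

import cv2
import numpy as np

def preprocess(frame):
    # step 1: resize the raw frame to 80x80 and convert it to grayscale
    return cv2.cvtColor(cv2.resize(frame, (80, 80)), cv2.COLOR_BGR2GRAY)

def stack_frames(last_four):
    # step 2: stack the 4 most recent preprocessed frames into one 80x80x4 input
    return np.stack(last_four, axis=2)

# at the very first time step there is only one frame, so it is simply repeated:
# state = np.stack([first, first, first, first], axis=2)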

network.png

The final output layer has the same dimensionality as the number of valid actions which can be performed in the game, where the 0th index always corresponds to doing nothing.

http://imgur.com/mfatQrY.png

 

Reinforcement learning

supervised learning : target label for each training example

unsupervised learning : no labels at all

reinforcement learning : has sparse and time-delayed labels – the rewards. Based only on those rewards the agent has to learn to behave in the environment.

Markov Decision Process

environment : is in a certain state (e.g. location of the paddle, location and direction of the ball, existence of every brick and so on).

agent can perform certain actions in the environment (e.g. move the paddle to the left or to the right).

These actions sometimes result in a reward (e.g. increase in score).

Actions transform the environment and lead to a new state, where the agent can perform another action, and so on.

reinforcement-learning-an-introduction.png

One episode of this process (e.g. one game) forms a finite sequence of states, actions and rewards : s0, a0, r1, s1, a1, r2, …, sn-1, an-1, rn, sn.

A Markov decision process relies on the Markov assumption, that the probability of the next state si+1 depends only on current state si and action ai, but not on preceding states or actions.

Total reward for one episode : R = r1 + … + rn

Total future reward from time point t onward : Rt = rt + rt+1 + … + rn

 

Discounted future reward

Rt = rt + γ rt+1 + γ² rt+2 + … + γ^(n−t) rn = rt + γ Rt+1

where γ is the discount factor between 0 and 1: the further into the future a reward is, the less we take it into consideration.

process-of-mdp.png

If we set the discount factor γ=0, then our strategy will be short-sighted and we rely only on the **immediate** rewards ( Rt = rt ). If we want to balance between **immediate** and **future** rewards, we should set discount factor to something like γ=0.9.

If our environment is *deterministic* and the same actions always result in same rewards, then we can set discount factor γ=1.
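
A tiny sketch of how the discount behaves for different γ (the reward sequence below is made up):

# discounted return Rt = rt + γ*rt+1 + γ²*rt+2 + ...
def discounted_return(rewards, gamma):
    total = 0.0
    for k, r in enumerate(rewards):
        total += (gamma ** k) * r
    return total

rewards = [1, 0, 0, 1, 1]                 # made-up rewards from time t onward
print(discounted_return(rewards, 0.0))    # 1.0    -> only the immediate reward counts
print(discounted_return(rewards, 0.9))    # ~2.385 -> future rewards count, but less
print(discounted_return(rewards, 1.0))    # 3.0    -> plain sum, no discounting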

Goal : choose an action that maximizes the (discounted) future reward

 

Q-Learning

Define Q(s, a) = the maximum discounted future reward when we perform action *a* in state *s*, and continue optimally from that point on.

Q(st, at) = max Rt+1

In other words, it is the best possible score at the end of the game after performing action a in state s.

 

Should you take action a or b? Pick the action with the highest Q-value!

π(s) = argmaxa Q(s, a), where π is the policy: the rule for choosing an action in each state.

 

Bellman equation

Q(s, a) = r + γ maxa’ Q(s’, a’)

This expresses the Q-value of state s and action a in terms of the Q-value of the next state s’: the maximum future reward is the immediate reward r plus the maximum discounted future reward from the next state.

The core of the Q-learning algorithm is the update Q[s,a] ← Q[s,a] + α ( r + γ maxa’ Q[s’,a’] − Q[s,a] )

α in the algorithm is the learning rate that controls how much of the difference between the previous Q-value and the newly proposed Q-value is taken into account. In particular, when α=1, the two Q[s,a] terms cancel and the update is exactly the Bellman equation.

The maxa’ Q[s’,a’] that we use to update Q[s,a] is only an approximation, and in the early stages of learning it may be completely wrong. However, the approximation gets more and more accurate with every iteration, and it has been shown that if we perform this update enough times, the Q-function converges and represents the true Q-values.
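
A minimal sketch of this tabular update loop in Python. env and its reset()/step() interface are hypothetical stand-ins (a simplified gym-style environment returning (next_state, reward, done)), and actions is the list of valid actions; neither comes from the post itself:

import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)                     # Q[(state, action)], 0 by default
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: mostly exploit the current Q, sometimes explore
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # the update from the text: move Q[s,a] toward r + γ max_a' Q[s',a']
            if done:
                target = r                     # no future reward from a terminal state
            else:
                target = r + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q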

 

Deep Q Network

The state of the environment in the Breakout game can be defined by the location of the paddle, the location and direction of the ball, and the presence or absence of each individual brick. This intuitive representation, however, is game-specific. Could we come up with something more universal that would be suitable for all games? The obvious choice is screen pixels: they implicitly contain all of the relevant information about the game situation, except for the speed and direction of the ball.

Two consecutive screens would have these covered as well.

Take the last four screen images, resize them to 84×84 and convert them to grayscale with 256 gray levels – we would have 256^(84×84×4) ≈ 10^67970 possible game states. That means about 10^67970 rows in our imaginary Q-table – far too many.
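
The size of that number is easy to check in Python:

import math

# 256 gray levels per pixel, 84*84 pixels per frame, 4 stacked frames
print(84 * 84 * 4 * math.log10(256))   # ~67970, so 256^(84x84x4) ≈ 10^67970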

Deep learning steps in. Neural networks are exceptionally good at coming up with good features for highly structured data. We could represent our Q-function with a neural network that takes the state (four game screens) and an action as input and outputs the corresponding Q-value.

deep-q-network-example

Alternatively, we could take only the game screens as input and output the Q-value for each possible action. This approach has the advantage that, if we want to perform a Q-value update or pick the action with the highest Q-value, we only need one forward pass through the network to have the Q-values of all actions available immediately.

deep-convolutional-neural-networks.png

This is a classical convolutional neural network with three convolutional layers, followed by two fully connected layers. People familiar with object recognition networks may notice that there are no pooling layers. If you think about it, pooling layers buy you translation invariance – the network becomes insensitive to the location of an object in the image. That makes perfect sense for a classification task like ImageNet, but for games the location of the ball is crucial in determining the potential reward, and we wouldn’t want to discard this information!
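
A hedged Keras sketch of a network in this style. The filter counts, kernel sizes and strides below follow the commonly cited DeepMind-style Atari setup and are assumptions rather than something stated in this post; n_actions is whatever the game allows (e.g. 2 for Flappy Bird):

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

n_actions = 2   # e.g. "flap" and "do nothing"

model = Sequential()
# three convolutional layers over the stack of four 84x84 grayscale frames
model.add(Conv2D(32, (8, 8), strides=(4, 4), activation='relu',
                 input_shape=(84, 84, 4)))
model.add(Conv2D(64, (4, 4), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu'))
# no pooling layers, so the spatial location of the ball is preserved
model.add(Flatten())
# two fully connected layers; the last one outputs one Q-value per action
model.add(Dense(512, activation='relu'))
model.add(Dense(n_actions, activation='linear'))
model.compile(optimizer='adam', loss='mse')
model.summary()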

 

 

ref : https://github.com/yenchenlin/DeepLearningFlappyBird,
https://github.com/asrivat1/DeepLearningVideoGames,
https://en.wikipedia.org/wiki/Convolutional_neural_network,

https://ai.intel.com/demystifying-deep-reinforcement-learning/,

deep learning library Keras : http://keras-rl.readthedocs.io/en/latest/agents/dqn/

[ keras ] xor example on ubuntu 14.04

 

0) Keras prerequisites : install LAPACK, BLAS, and gfortran

sudo apt-get install liblapack-dev libblas-dev  gfortran

ref : https://dsin.blogspot.com/2009/07/ubuntu-lvm-opensource-installation.html

NOTE : gfortran is needed to prevent the following error

error: library dfftpack has Fortran sources but no Fortran compiler found

 

1) Install keras

sudo pip install keras

 

2) Install tensorflow

sudo apt-get install python-virtualenv

Create a virtual environment

virtualenv --system-site-packages testenv

 

source testenv/bin/activate

 

Make sure pip >= 8.1 is installed.

Upgrade pip **inside** the virtual environment using the following command (upgrading the system pip outside the virtual environment is not recommended, since a lot of Ubuntu tools rely on it):

pip install --upgrade pip

 

pip install --upgrade tensorflow

 

3) Make sure the keras configuration uses tensorflow

~/.keras/keras.json

{
    "epsilon": 1e-07,
    "floatx": "float32",
    "image_data_format": "channels_last",
    "backend": "tensorflow"
}

 

4) Test run keras

$ python
>>> import keras
Using TensorFlow backend

 

xor.py

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD
import numpy as np

# XOR truth table: inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# small feed-forward network: 2 inputs -> 8 hidden tanh units -> 1 sigmoid output
model = Sequential()
model.add(Dense(8, input_dim=2))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

sgd = SGD(lr=0.1)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(X, y, batch_size=1, epochs=1000)
print(model.predict_proba(X))
"""
[[ 0.0033028 ]
 [ 0.99581173]
 [ 0.99530098]
 [ 0.00564186]]
"""

 

ref : https://python3.wannaphong.com/2017/01/keras-xor.html,

https://github.com/jfsantos/keras-tutorial,

https://keras.io/,

https://www.tensorflow.org/install/

[ webpack ] handlebars-loader

Installation

npm i handlebars-loader --save

Config

webpack.config.js

{
  ...
  module: {
    rules: [
      ...
      {
        test: /\.hbs$/,
        loader: "handlebars-loader"
      }
    ]
  }
}

Usage

var bookListingTemplate = require("./book-listing.hbs");

var div = document.createElement('div');
div.innerHTML = bookListingTemplate({
  username: "test",
  info: "Your books are due next Tuesday",
  books: [
    { title: "A book", synopsis: "With a description" },
    { title: "Another book", synopsis: "From a very good author" },
    { title: "Book without synopsis" }
  ]
});

 

ref : https://github.com/pcardune/handlebars-loader/blob/master/README.md,

https://github.com/pcardune/handlebars-loader/blob/master/examples/basic/app.js

[ gae ] UserWarning: There are too many files in your application for changes in all of them to be monitored. You may have to restart the development server to see some changes to your files.

Error

/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/mtime_file_watcher.py:156: UserWarning: There are too many files in your application for changes in all of them to be monitored. You may have to restart the development server to see some changes to your files.
'There are too many files in your application for '

Solution

/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/mtime_file_watcher.py

...

for dirpath, dirnames, filenames in os.walk(self._directory,
                                            followlinks=True):
  # start added by dsin
  if '/node_modules/' in dirpath or '/lib/' in dirpath:
    continue
  # end added by dsin

...

ref : https://johanndutoit.net/app-engine-too-many-files/

[ webpack ] sass

Installation

npm install sass-loader node-sass css-loader --save-dev

NOTE : node-sass is required by sass-loader. The configuration below also uses style-loader, postcss-loader, precss and autoprefixer, so install those as well if they are not already in the project.

 

Configuration

module.exports = {
  entry: ['./assets/stylesheet/main.scss'],
  ...
  module: {
    rules: [{
      test: /\.css$/,
      use: ['style-loader', 'css-loader']
    },
    {
      test: /\.scss$/,
      use: [{
        loader: 'style-loader', // inject CSS to page
      }, {
        loader: 'css-loader', // translates CSS into CommonJS modules
      }, {
        loader: 'postcss-loader', // run PostCSS actions
        options: {
          plugins: function () { // PostCSS plugins, can be exported to postcss.config.js
            return [
              require('precss'),
              require('autoprefixer')
            ];
          }
        }
      }, {
        loader: 'sass-loader' // compiles Sass to CSS
      }]
    }]
  }
}

 

Main file

assets/stylesheet/main.scss

@import "~bootstrap/scss/bootstrap";

 

Then compile with webpack.

 

ref : https://github.com/webpack-contrib/sass-loader,

https://getbootstrap.com/docs/4.0/getting-started/webpack/