Python cheat sheet (in Jupyter Notebook)

If you want Jupyter autocompletion (TabNine) in notebooks, install it with:

pip3 install jupyter-tabnine --user
jupyter nbextension install --py jupyter_tabnine --user
jupyter nbextension enable --py jupyter_tabnine --user
jupyter serverextension enable --py jupyter_tabnine --user

Load CSV File With NumPy
import numpy
filename = 'xxx.csv'
raw_data = open(filename, 'rt')
data = numpy.loadtxt(raw_data, delimiter=",")
print(data.shape)

Load CSV using Pandas

import pandas
filename = 'xxx.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']  # column headers
data = pandas.read_csv(filename, names=names)
print(data.shape)

convert a one-dimensional list of data to an array

# one dimensional example
from numpy import array
# list of data
data = [11, 22, 33, 44, 55]
# array of data
data = array(data)
print(data)
print(type(data))

convert a two-dimensional list of data to an array

# two dimensional example
from numpy import array
# list of data
data = [[11, 22],
	[33, 44],
	[55, 66]]
# array of data
data = array(data)
print(data)
print(type(data))

Array indexing

#for one dimensional:
# simple indexing
from numpy import array
# define array
data = array([11, 22, 33, 44, 55])
# index data
print(data[0])
print(data[4])
print(data[-2])
#data slicing
print(data[:])
print(data[0:1])
print(data[-2:])
11
55
44
[11 22 33 44 55]
[11]
[44 55]
#Two dimensional indexing 
from numpy import array
# define array
data = array([[11, 22], [33, 44], [55, 66]])
# index data
print(data[0,0])
print(data[0,])
# data slicing
# separate data

data = array([[11, 22, 33],
	[44, 55, 66],
	[77, 88, 99]])
X, y = data[:, :-1], data[:, -1]
print(X)
print(y)
11
[11 22]
[[11 22]
 [44 55]
 [77 88]]
[33 66 99]

To train a model, split the data into train and test rows

The dataset is divided into two parts: the first set is used to train the model and the second to test the accuracy of the trained model. All columns are kept by specifying ':' in the second dimension index. The training dataset is all rows from the beginning up to the split point; the test dataset is the remaining rows.

# split train and test
from numpy import array
# define array
data = array([[11, 22, 33],
              [44, 55, 66], 
              [77, 88, 99],
              [12, 14, 44],
              [17, 15, 18],
              [45, 23, 22]])
# separate data
split = int(len(data) * 0.8)  # 80% of the data is separated for training and 20% for testing
train,test = data[:split,:],data[split:,:]
print(train)
print(test)
split
[[11 22 33]
 [44 55 66]
 [77 88 99]
 [12 14 44]]
[[17 15 18]
 [45 23 22]]
4

Array reshaping:

After slicing, you may need to reshape the data.

You can read the sizes of your array's dimensions from the shape attribute and use them when specifying a new shape.

# array shape
from numpy import array
# list of data
data = [[11, 22],
	[33, 44],
	[55, 66]]
# array of data
data = array(data)
print(data.shape)
print('Rows: %d' % data.shape[0])
print('Cols: %d' % data.shape[1])
(3, 2)
Rows: 3
Cols: 2

Reshape 1D to 2D Array

It's common to need to reshape a 1D array into a 2D array:

In the case of reshaping a one-dimensional array into a two-dimensional array with one column, the new shape tuple is the length of the array as the first dimension (data.shape[0]) and 1 as the second dimension.

from numpy import array
# define array
data = array([11, 22, 33, 44, 55])
print(data.shape)
# reshape
data = data.reshape((data.shape[0], 1))
print(data.shape)

(5,)
(5, 1)

Reshape 2D to 3D array

We can use the sizes in the shape attribute of the array to specify the number of samples (rows) and columns (time steps), and fix the number of features at 1.

from numpy import array
data = [[11, 22],
	[33, 44],
	[55, 66]]
# array of data
data = array(data)
print(data.shape)
# reshape
data = data.reshape((data.shape[0], data.shape[1], 1))
print(data.shape)
(3, 2)
(3, 2, 1)

Have a nice day 🙂

Data visualization in Mixed reality, from data to result

The data we generate from sensors is usually in raw form, generally as CSV. It is hard to visualize just by staring at a long comma-separated list.
The traditional forms of data visualization are pie charts, graphs, and other visual formats. These are still not enough to find patterns within the data or to see it in pictorial form; we often lose valuable information in the blind spots of those visualization methods.
We experimented with mixed reality for data visualization using various combinations. This series goes through the whole mixed reality data visualization process, from raw data to the final result.

  1. Developing the sensor and collecting data from it:

We designed, assembled, and programmed a sensor for air pollution data collection.

Sensor Top view

We then deployed our sensors in various parts of the university. Each sensor is designed to collect data every 30 minutes and records different urban pollution values such as PM2.5, PM1.0, PM10, and nine more. The assembled unit also has humidity, GPS, and temperature sensors, which collect data simultaneously.
The collected data looks like the sample below.

Excel file format

After cleaning and rearranging the data, the sensor data is ready to use for visualization.
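As a rough illustration of that cleaning step, here is a minimal pandas sketch; the column names and values are made up, not the sensor's actual schema:

```python
import io
import pandas as pd

# Hypothetical raw sensor CSV: timestamp, PM2.5, PM10, temperature
raw = io.StringIO(
    "time,pm25,pm10,temp\n"
    "2020-01-01 00:00,12.1,30.5,18.2\n"
    "2020-01-01 01:00,13.4,29.8,17.9\n"
    "2020-01-01 00:30,,31.0,18.0\n"   # out of order, missing PM2.5
)

data = pd.read_csv(raw, parse_dates=["time"])
data = data.dropna()                              # drop incomplete readings
data = data.sort_values("time").reset_index(drop=True)
print(data.shape)                                 # (2, 4)
```

The same dropna/sort pattern applies however many pollution columns the real file has.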

2. Developing the city model for the overlay of data in mixed reality data visualization.

I used the same model that I prepared for the virtual reality data visualization. The process is in the link below.

Using data cubes in city model by using rhino to visualize in virtual reality.

3. Making the data surface by using sensor locations and sensor data.

Using the GPS location, we plot each sensor in the city model. The x and y values are generated from the GPS coordinates so that the sensor data sits at the exact sensor location.
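One way to generate such x and y values is a simple equirectangular (flat-earth) approximation, which is adequate at campus scale; the reference coordinates below are illustrative, not the actual site:

```python
import math

EARTH_RADIUS_M = 6371000  # mean earth radius in metres

def gps_to_local_xy(lat, lon, ref_lat, ref_lon):
    """Project (lat, lon) to metres east (x) and north (y) of a reference point."""
    x = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return x, y

# a sensor roughly 100 m north of the model's reference corner
x, y = gps_to_local_xy(35.0009, 129.0, 35.0, 129.0)
print(round(x), round(y))  # 0 100
```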

The border points are decided by the mean value of all sensor data over one period of time.
The final edited sensor values look like the figure below:

pollution data, differences, and data analysis chart

Since we want to visualize the rate of change of pollution in a particular area, we can find patterns in which areas the pollution spreads fast and how it spreads within a specific area.

We use Rhino Python to plot the data in Rhino in the form of points. The data points are then used to make the data surface by using the patch option in Rhino.

import Rhino
import Rhino.Geometry as rg
import rhinoscriptsyntax as rs
import csv



def readFile():
    filename = rs.OpenFileName("Open CSV file", "*.csv|", None, None, None)
    file = open(filename, 'r')
    reader = csv.reader(file)

    ptlist = []

    for line in reader:
        # columns: id, x, y, z, sensor value
        x = float(line[1])
        y = float(line[2])
        z = float(line[3])
        sensor1 = float(line[4])
        scaled = sensor1 * 10000  # exaggerate the value so it reads as height
        finalpt = z + scaled
        pt = (x, y, finalpt)
        ptlist.append(pt)

    file.close()

    for point in ptlist:
        rs.AddPoint(point)


readFile()

The height of the developed surface represents the value of pollution change over the given period. But the data surface is very different from the site topography, so I also used color to represent the value of pollution change in each area. For that I used Grasshopper and its heat-map generation option. The Grasshopper flow chart is shown below.
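The colouring itself was done in Grasshopper, but the underlying value-to-colour mapping can be sketched in plain Python; the blue-to-red ramp here is an assumption, not the exact Grasshopper gradient:

```python
def value_to_rgb(value, vmin, vmax):
    """Map a pollution-change value onto a blue (low) to red (high) ramp."""
    t = (value - vmin) / float(vmax - vmin)       # normalise to 0..1
    t = min(max(t, 0.0), 1.0)                     # clamp out-of-range values
    return (int(255 * t), 0, int(255 * (1 - t)))  # (R, G, B)

print(value_to_rgb(0, 0, 100))    # (0, 0, 255)  pure blue
print(value_to_rgb(100, 0, 100))  # (255, 0, 0)  pure red
```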

grasshopper flowchart

After making the heat-map, the model looks like this:

Top view of the model with heat-map

We use various ways to present the data with the model in mixed reality.

After making the Rhino file ready, we export it in FBX format to import it into Unity.

The steps needed to prepare the Unity file are explained here.

Setting up a Unity file to develop for the Magic Leap glasses.

After finishing the Unity development, we need to build a Magic Leap app, which is then installed on the Magic Leap.

The final output looks like this:

Setting up a Unity file to develop for the Magic Leap glasses.

Open the Unity project in version 2019.4.6f1. Although there are more recent versions of Unity, while working I found this one stable and bug-free. To install Unity, the Magic Leap SDK, and the desktop companion software, please follow the link:

https://developer.magicleap.com/en-us/learn/guides/unity-setup-intro

Follow the video below to set up Unity for Magic Leap.

Cutting Voronoi-shaped holes in a panel with a hot-wire cutter, using a robot arm

The main problem I faced while cutting the non-parallel holes in the panel was how to tell the robot which direction the robot tool should face.

If the default position is as in the top part of the picture above and the required position is as in the bottom figure, we need the angles in the x, y, and z directions.

The angle with the x axis is (90 - x) with respect to the yz plane, and similarly for the other axes. It is hard to explain exactly how to get the required values, so I tried to show how to get the required data and make the robot follow the path.
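As a sketch of how such values can be computed, the angle a tool-direction vector makes with each axis follows from the dot product with that axis; the example vector is illustrative, not taken from the actual job:

```python
import math

def axis_angles(vx, vy, vz):
    """Angles (degrees) a direction vector makes with the x, y and z axes."""
    length = math.sqrt(vx * vx + vy * vy + vz * vz)
    return tuple(math.degrees(math.acos(c / length)) for c in (vx, vy, vz))

# a tool tilted 45 degrees between the x and z axes
ax, ay, az = axis_angles(1, 0, 1)
print(round(ax), round(ay), round(az))  # 45 90 45
```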

A hot-wire robot cutter used to cut different types of columns from styrofoam.

We designed and cut several columns with a robot arm with an attached hot wire.

We used two of the column models designed by Joseph Choma in this work. The work can be found here,

The photos below were taken during the process and show the final results.

The full process can be seen in the video below.

How to find the overlap area of a brick wall, and how to find where to apply mortar, with Rhino Python.

To make the work more automatic when using robot-arm construction, we need to know where the bricks are joining and where we need to apply the mortar.

By finding the exact places to apply mortar, we can reduce the amount of mortar used, produce less waste, and keep the work clean.

Here we have a brick wall that we designed. As shown in the picture below, the joint between bricks is different for each brick, so we need to know where to apply the mortar.

We used Rhino Python to separate the wall into alternate layers, then used Boolean intersection; from that we get the surface area centroid of each joint.

import rhinoscriptsyntax as rs
import math

def getAllObjects():
    all = rs.GetObjects()
    return all
    
def divideAlternate(all):
    brickeven = []
    brickodd = []
    
    hmin = 0
    hmax = 0
    for block in all:
        center = rs.SurfaceAreaCentroid(block)
        h = center[0][2]
        if h < hmin :
            hmin = h
        elif h > hmax:
            hmax = h
    print(hmin)
    print(hmax)
    diff = int((hmax-hmin)/28)
    print(diff)

    for block in all:
        c = rs.SurfaceAreaCentroid(block)
        ct01 = c[0]
        height = (ct01[2])
        
        #heightOrigin = int(((height - hmin)* 10**3) / 10.0**3)
        heightOrigin = round((height-hmin),0)
        x = heightOrigin / 28

        if x % 2 == 0:
            brickeven.append(block)
        else:
            brickodd.append(block)
    

    trans = 0, 0, 0.001
    rs.MoveObject(brickeven, trans)
    surface = rs.BooleanIntersection(brickeven, brickodd, True )
    #intersect = rs.GetObject("select")
    x = []
    y = []
    for srf in surface:
        surface1 = rs.ExtractSurface(srf, 0)
        border = rs.DuplicateSurfaceBorder(surface1)
        ct = rs.SurfaceAreaCentroid(border)
        direction = ct[0]
        distance = 5
        curve = rs.OffsetCurve(border, direction, distance)
        pts = rs.CurvePoints(curve)
        xa = pts[0]
        print(xa)
        #print(pts)
        for pt in pts:
            rs.AddPoint(pt)
'''
        cen = rs.SurfaceAreaCentroid(all)
        xaxis = round((cen[0][0]), 3)
        x.append(xaxis)
        yaxis = round((cen[0][1]), 3)
        y.append(yaxis)
        zaxis = round((cen[0][2]), 3)
        rs.AddPoint(xaxis,yaxis,zaxis)
'''
def main():
    all = getAllObjects()
    divideAlternate(all)
    #makeGluePoints(all)
    
main()
    

Synchronization of work between two robot arms.

We used RoboDK and its Python API to develop the code that divides the work between two robots.

As the input of robot 1 is handled by robot 2 and vice versa, the two robots can work on the same job without fear of colliding.

If one robot arm stops working, it will stop the other robot arm as well and prevent any damage.
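That stop-on-failure interlock can be sketched in plain Python with a shared flag; this is only a schematic of the idea, not the actual RoboDK API calls:

```python
import threading
import time

stop = threading.Event()  # shared kill switch for both arms

def run_arm(name, steps, fail_at=None):
    """Run a sequence of work steps; trip the shared stop flag on a fault."""
    for step in range(steps):
        if stop.is_set():
            print("%s: halted by the other arm" % name)
            return
        if step == fail_at:
            print("%s: fault detected, stopping both arms" % name)
            stop.set()
            return
        time.sleep(0.01)  # stand-in for one pick-and-place motion

robot1 = threading.Thread(target=run_arm, args=("robot1", 10, 3))
robot2 = threading.Thread(target=run_arm, args=("robot2", 10))
robot1.start(); robot2.start()
robot1.join(); robot2.join()
print("both arms stopped:", stop.is_set())
```

In the real setup each thread would drive one arm through the RoboDK API, but the shared-flag pattern is the same.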

The video is the final output of a recent practice run we did for brick wall construction.

How researchers make robots autonomous for the construction field:

  1. The OBIKO company mainly works on robot automation.

Their robot is capable of accurately 3D-scanning its environment and working from that. There are no technical details about the robot; however, when I watched the video, at 0:57 it shows the computer screen, and it seems they use Unity and 3D scanning to create the robot's path.

The robot seems to move automatically and create its painting path. This can be seen at 0:15.

2. Another successful construction robot is the SAM-100:

This robot is a semi-automated mason. It grabs a brick, spreads the mortar, and lays the brick, using a laser-guided system for placement. In the video below the robot is laying a brick wall without human help.

A working demonstration of the SAM-100.

While working, it collects data, errors, and performance rates, which are used to improve future work.

3. Spatial Timber Assemblies project by ETH Zurich:

The Spatial Timber Assemblies project by ETH Zurich is another successful project completed with autonomous robotic fabrication. As the video below shows,

the robot does almost all the work to fabricate a truss matching the digital plan. In the video above, from 0:58, we can see that they use Rhino and Grasshopper along with Python code to make the sequences. There is no more information on how it was built or about the other processes.
