WordPress to Nikola

Long time no blogging - and no, this is not an April Fools' joke. I just spent some time moving over from WordPress to a static blog. I wasn't able to upgrade to WordPress 5 (it broke my site), so I was looking for alternatives. As I like Python, Nikola (http://getnikola.com) fit the bill. Still, I want to go all static and have therefore disabled comments. If you like a post and want your opinion visible, please don't hesitate to drop me a note at the email listed under contact, mentioning the post you refer to, and I will include your opinion here.

Meeting Report PACC

Report on 16/4/13

Yana

Building GUI

Met a problem with overwriting personal files; researched parallelized processes and SSH keys

To Do

Assess whether Moodle is affected by the Heartbleed vulnerability (OpenSSL)
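A quick first check (just a sketch, assuming shell access to the Moodle server) is to look at the installed OpenSSL version, since only the 1.0.1 through 1.0.1f releases are affected:

openssl version
dpkg -l openssl libssl1.0.0        (on Debian/Ubuntu: check whether the patched package version is installed)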

Kamilla/Asset

No progress on iSCSI

To Do

Check whether the cloud services are affected by the vulnerability

Georgyi

Ran computational tests; reviewed and upgraded the software for the CUDA graphics cards

To Do

Find suitable benchmarks (ask Magzhan)

Alexandra

Installed the HDFS and LDAP machines in 7522; mounted HDFS with FUSE, met problems with protocols and permissions

To Do

Start to deploy HDFS (in the background, in lab 7422)

Temirlan

Finished the mapper/reducer; helped with the installation of HDFS/LDAP

To Do

Help deploy HDFS in 7422; test the map/reduce framework

Erkanat

Made progress on OpenStack, ran into some errors

To Do

Solve the problems with OpenStack

Almas

Tested bandwidth between nodes in 7422 (benchmarks)

To Do

Test the capacity of the switch and run benchmarks; make a test tree script

Virtualisation, part II

So, the last two weeks were pretty difficult for me, maybe because of the warm spring (at freaking last!), maybe because I had to study a lot.

Despite that, I have some results from my virtualization research.

As a completely new user of QEMU, VT-d, and all the other related stuff, I had a pretty unpleasant time with lots of theory (huh, I'd prefer more hands-on stuff). Bad news: I was VERY slow. Good news: if my tests go OK, I now have a perfect, detailed guide for all the other researchers.

So, these are my results:

Got an understanding of all the necessary and lots of the unnecessary things. Set up a Windows machine which recognizes my patient (an NVIDIA Tesla K20c). However, this guy is not able to start (Windows device error code 10). But this might be OK, because non-simulated Windows gives the same response. So, there might be three cases: everything has gone completely wrong; everything is OK and it's just a visual bug (for this case I will try to test CUDA in some way); or there are some problems with the power supply for these monstrous GPUs (tests were performed on two stations: one is the PC where I performed all the actions, the other is a completely new one). In the next post I will hopefully share my way to success.
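When I get to the CUDA test, the plan (just a sketch for now - it assumes the NVIDIA driver and the CUDA toolkit with its samples are installed inside the guest) is roughly:

nvidia-smi            (should list the Tesla K20c if the driver can talk to it at all)
deviceQuery           (built from the CUDA samples; should report the K20c and its compute capability)
bandwidthTest         (also from the samples; host-to-device copies will show whether the passthrough path is sane)

If those run inside the VM, the error code 10 really was just a visual thing.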

P.S. I also spent some time helping others understand my last scripts, and fixing a couple of them =)

Yours, Georgiy Krylov.

GPU passthrough to guest machine

Good day, readers! I want to share with you some ideas on GPU virtualisation.

This topic is pretty new for me; however, since I have experience with CUDA and was sometimes called 'the GPU master' by our prof, I decided to take this as a challenge for myself.

First I looked for official NVIDIA virtualization support. It is indeed provided, but only for the GRID family of devices.

So there are three ways left:

Use KVM with Red Hat (I didn't consider this too much, since it is not free software)

Use Xen, which has shown some doubtful results (according to reports on forums)

Use QEMU (the KVM alternative for Ubuntu, with no Red Hat kit) - currently in progress.

This third method is based on PCI passthrough: using pci-stub and then passing the device through VFIO to the virtual machine. However, it is not a well-explored path and has some (lots of) bugs and ambiguities (at least for some users). But there are people who have succeeded at passing up to 99% of the native performance through to the virtual machine.
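To give an idea of what this looks like in practice - just a sketch, not something I have working yet; the PCI address 01:00.0, the device ID, and the disk image name below are placeholders for what lspci and your own setup actually report - the steps boil down to hiding the GPU from the host and handing it to QEMU via VFIO:

lspci -nn | grep -i nvidia                         (note the PCI address, e.g. 01:00.0, and the [10de:....] vendor:device ID)
(add "intel_iommu=on pci-stub.ids=10de:<device-id>" to the kernel command line and reboot, so no host driver claims the card)
sudo modprobe vfio-pci
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "10de <device-id>" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
sudo qemu-system-x86_64 -enable-kvm -m 8192 -cpu host \
  -device vfio-pci,host=01:00.0 \
  -drive file=windows.img,format=raw

The kernel parameters assume an Intel board with VT-d; on AMD it would be amd_iommu=on instead.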

Yours, Georgiy Krylov.

So, these are my January through February activities

During the period starting from 27 Jan and lasting till 24 Feb, the following jobs were performed: Got a clear understanding of the existing tasks and their aims. Prepared PCs for the execution of future tasks. Studied various articles on SSH and the standard Linux shell. Implemented some scripts for IP reports, SSH key reports, etc. - all necessary to make the lab autonomous. Learned about and worked with file-distribution and ping utilities. Solved some problems with PCs, both those that occurred as a result of the work and spontaneous ones. Additionally, maintenance and climate-control work was performed.

Yours, Georgiy Krylov.

Introduction

Hello everyone! My name is Georgiy, I am originally from Almaty, Kazakhstan, and I am assisting professor Ulrich with the creation and maintenance of the Private Academic Cloud at Nazarbayev University, Kazakhstan. I have some experience with CUDA programming, I am not afraid of trying something new, and I have a pretty weird (but lazy) mind, so maybe that's why I am here.

PACC report 1

Hello, this is my first post during Spring 2014. To start with, my first task was related to NAS (Network Attached Storage): in collaboration with Kamila, we assembled 5 Synology DS413j DiskStations. The Synology DS413j is a 4-bay NAS server, and each NAS has a total of 16TB of storage. As always, we couldn't install the software using the official manual, but with the help of the professor we finally found our NASes on the network (more on this in Kamila's post).

After some discussions we decided to implement iSCSI (Internet Small Computer System Interface). iSCSI is an Internet Protocol for linking data storage facilities. It consists of two parts: initiators and targets. The initiator is an iSCSI client, in our case our gateway (main server); the target is an iSCSI server, in our case a NAS. Creating an iSCSI target was very easy using Synology DSM: I created a block-level target using RAID 10 and CHAP authentication. To create the iSCSI initiator I used a tutorial from the Synology wiki, with minor changes: instead of "yum install iscsi-initiator-utils" I used "sudo apt-get install open-iscsi", because our gateway runs Debian. Also, because our NASes had about 8TB of usable storage, I used parted to create a GPT partition instead of fdisk. That is the tutorial I followed.
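For reference, the initiator side on the Debian gateway boils down to a handful of commands (a sketch - the 192.168.19.51 address, the target IQN, and the /dev/sdb device name are placeholders for our actual values):

sudo apt-get install open-iscsi
sudo iscsiadm -m discovery -t sendtargets -p 192.168.19.51        (ask the NAS which targets it offers)
sudo iscsiadm -m node --targetname "iqn.2000-01.com.synology:NAS1.Target-1" -p 192.168.19.51 --login
sudo parted /dev/sdb mklabel gpt                                  (GPT because the LUN is larger than 2TB)
sudo parted /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1

The CHAP credentials configured on the DSM side go into /etc/iscsi/iscsid.conf before the login step.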

We had some problems with NAS2: its fan was malfunctioning. Now we are waiting for it to be repaired.

Currently I am trying to find a way to recover the gateway in case one of the NASes fails.

PACC Meeting April 9, 2014

Ulrich: Good news. We might be able to hire interns for the summer. Nursultan might be convinced to be POC if I am not available in the summer. We might want to find newer projects, an area where he can perform more work. My wish is that he likes his team and is able to direct it well. Aleks is now with us, will be helping with the administrative aspects, and might be interested in some of your work.

TO DO: Fix gateway. Asset may want to work on this. Asset: I do not know anything about scripting. Might work with Yakanat. Ulrich: I will try in 10 minutes after this meeting.

Yakanat: Tried to fix the gateway. Overstacking issue. Can I connect the gateway to an external PC? Ulrich: Someone wanted to work with VPN. Tamerlan: No success, possibly a password issue; will look into it.

TO DO: Get fixed IPs for these computers; Georgiy's solution may not work.

Almaz: The script I was working on, network performance. Should reinstall the network scripts and run the cloning mechanism, which should solve the graphics issues.

Alexandra: MAC address collection may be a better option than collecting IP addresses; Almaz will get the MAC addresses from George. Discussed issues about the lack of PCs with Windows.

Almaz: We should ask IT for fixed IP addresses and will follow up with them; Nursultan may provide help with this.

Alexandra: Reading more in depth about the computer rack; is this a type of cluster?

Temirlan: Testing; missed working on Hadoop 2. Didn't realize I was working in another section and brought the output to the wrong node.

Kamila: Looked at logical volumes, which could be a good solution; I hope that we get some response. NAS2 stopped again once it was returned to us; we haven't been contacted about its progress and status. Found a solution for logical volumes, almost finished. We have a security-key solution to restore access. I am switching to an energy-saving solution.

TO DO: ask for NAS2. Get it back.

Asset: Working on the SSH script, having difficulties. Cannot create the auto-mount script; handling a lost connection, or something breaking, was the purpose of this scripting. Ulrich: Should be a common problem; do theoretical research/study for a solution.

Ulrich: Nursultan, what sounds promising to you from the various projects? We have insufficient research in virtual networks. We still hope to get these remaining funds. I will return in October for the last days of the semester; in the last 8 weeks of the semester I will work on the senior projects working with me.

Alexandra, Almaz: Will you be able to work on the program? Prepare a cover letter and a resume.

Ulrich: Plans to get some Raspberry Pis and try out some projects with those. On the 23rd of April we need to show some of the interesting projects we are working on to the school administrators.

PACC project report 1

My and Asset's task for the first few weeks of the PACC research was to assemble, configure, and run the Synology DiskStation DS413j units. Initially we had 5 of them, each having 4 hard drives of 4TB each.

One of the main challenges at first was to find them on the network, since opening the link http://find.synology.com/ gave us nothing at all. Then, with the help of professor Norbisrath, we managed to find them; here are the steps that were required:

1. In terminal write
for i in {2..254}; do ping -c1 -t1 192.168.19.$i &>/dev/null && echo $i & done
2. Then we needed to check those IPs which gave some response.
Afterwards, we installed the DSM following the user's guide: http://ukdl.synology.com/download/ds/userguide/DSM4.1/Syno_UsersGuide_NAServer_enu.pdf

DSM has quite a fancy user interface, where you can easily monitor and change everything you want. In order to always be aware of the NAS statuses, I enabled the email notification system. First, I investigated whether our university had an SMTP server or made it possible to run one, but it soon turned out that this couldn't be accomplished. Thus, I found a way to use the Gmail SMTP server for this purpose: SMTP server smtp.gmail.com, SMTP port 587.
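A quick way to verify from the gateway that the Gmail relay is actually reachable on that port (just a connectivity sketch, nothing Synology-specific) is:

openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587

If the TLS handshake completes, the NAS only needs a Gmail account's credentials on top of that to send the notifications.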

For testing purposes we tried creating volumes under different RAID levels, until it was decided to implement iSCSI and mount all the NASes on our server. The iSCSI setup for the NASes will be described in Asset's blog post later.

We also experienced some problems with NAS2, whose internal fan was often failing, so it was decided that it should be replaced. Currently we are waiting to get it back, so that we can finish the auto-mount setup.

ownCloud, which we configured during the summer PACC project, allows only one data directory, plus the possibility to add local folders as external storage. However, it was decided that such a design is not quite convenient, and there was a suggestion to use logical volumes. NAS1 and NAS2 are planned to be used for ownCloud storage, NAS3 and NAS4 for ownCloud backup, and NAS5 for sstcloud backup. After NAS2 arrives, I am going to configure logical volumes for the data directory.
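The rough plan for the logical volume looks like this (a sketch only - it assumes the iSCSI LUNs from NAS1 and NAS2 appear on the server as /dev/sdb and /dev/sdc, and the volume group and volume names are placeholders):

sudo pvcreate /dev/sdb /dev/sdc                         (turn both iSCSI disks into physical volumes)
sudo vgcreate owncloud_vg /dev/sdb /dev/sdc             (pool them into one volume group)
sudo lvcreate -l 100%FREE -n owncloud_data owncloud_vg
sudo mkfs.ext4 /dev/owncloud_vg/owncloud_data
sudo mount /dev/owncloud_vg/owncloud_data /home/owncloud

The nice part is that the volume group can later be grown with vgextend/lvextend by adding further LUNs, without touching the ownCloud configuration.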

Besides, I have also updated ownCloud to 6.0.2. This was performed in the admin tab, and afterwards you need to move the data directory out of the web root directory, following these steps:

1) sudo cp -R /var/www/owncloud/data /home/owncloud
2) sudo chown -R www-data:www-data /home/owncloud/
3) Change ('datadirectory' => '/var/www/owncloud/data') to ('datadirectory' => '/home/owncloud') by editing:
sudo nano /var/www/owncloud/config/config.php

I also enabled encryption of the user data, which is also protected by a recovery key (allowing recovery of users' files in case of password loss).

Currently I am searching for power-saving options for Windows/Linux and Wake-on-LAN.
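For the Wake-on-LAN part, a minimal sketch (eth0 and the MAC address below are placeholders) needs two things: the NIC of the machine to be woken has to accept magic packets, and the gateway has to send them.

sudo ethtool -s eth0 wol g            (on the target PC: enable wake on magic packet)
sudo apt-get install wakeonlan        (on the gateway)
wakeonlan 00:11:22:33:44:55           (wake the PC with that MAC address)

MAC addresses (rather than IPs) are what matter here, since the magic packet is addressed by MAC.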

Report (Almas) early draft

Lab 7422

January - February 2014:

Used chmod -R 777 on /usr/local, which created a vulnerability that may have been exploited by viruses and malware. Corrected by UlNo (the old local folder was deleted and a copy was cloned from another PC).

Installed ADT bundle on 3 PCs (Quiz Users created by Yana).

Uninstalled the ADT bundle, installed Eclipse Kepler. Added the Android, C/C++, and PyDev extensions. Used the cloning procedure to copy Eclipse onto all available PCs (around 20).

March 2014:

Week 10.03.2014 - 17.03.2014: upgraded around 15 PCs using the cloning procedure.

Week 18.03.2014 - 25.03.2014: tasked with benchmarking. Busy with exams. Hope to start later in the week - Thursday and the whole break.

Week 24.03.2014 - 30.03.2014: benchmarking task not started; from Wednesday to Friday checked all PCs for problems: found out that 2 PCs (12, 18) had broken UPSes (asked Yerlan to fix this; he brought new ones on Friday); upgraded 2 PCs which hadn't been updated earlier; checked which PCs had a long network configuration at boot (10, 11, 18 (later)); ran the cloning mechanism and check_online 7 times, checked which PCs didn't respond (4, 5, 9, 10, 11, 12, 16, 18, 19, 20). Tried to identify the problem with PC number 29 and concluded that the problem lies in the RAM (later confirmed by Yerlan). Worked with Yerlan (IT dept) on solving the problems with PCs 29 and 31; he concluded that a replacement of the RAM was necessary.

April 2014:

Week 31.03.2014 - 06.04.2014: tested two PCs with netperf using the simple command netperf -H <IP address>. Worked with Georgiy on some of the problems that were identified a week earlier: the proxy refusing connections, and the long network configuration during boot. One of the reasons: a week earlier, during the installation of the new UPS, the router was disconnected. Georgiy instructed me on the solution to the problem with the long network configuration. Georgiy also shed light on the reason why class_clone_from_here was unsuccessful: I should have run host_clone_from_here.
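For reference, the netperf runs look roughly like this (a sketch; the IP address is a placeholder, and netserver has to be running on the remote node first):

netserver                                       (on the remote node: start the netperf daemon, default port 12865)
netperf -H 192.168.19.11 -t TCP_STREAM -l 30    (on the local node: a 30-second TCP throughput test towards the remote node)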