Wednesday, May 22, 2013

Debug: undefined reference to 'vtable for XXXClass'

undefined reference to 'vtable for XXXClass'

If a class declares virtual methods and defines them outside the class, then g++ generates the vtable only in the object file that contains the out-of-class definition of the first declared virtual method (the so-called key function):
//test.h
struct str
{
   virtual void f();
   virtual void g();
};

//test1.cpp
#include "test.h"
void str::f(){}

//test2.cpp
#include "test.h"
void str::g(){}
The vtable will be emitted into test1.o, but not into test2.o.
This is an optimisation g++ implements to avoid having to emit, in every translation unit, the in-class-defined virtual methods that the vtable would pull in.
The link error above means that the definition of a virtual method (str::f in this example) is missing from your project.

Wednesday, May 15, 2013

Problem solved: Why doesn't Chrome remember its window size settings?

The window position is only saved when the window state (maximized/minimized/restored) changes.
So, move the window to where you want it, then resize it slightly (this can also be done before the move), and then shut down Chrome and restart. This is still really a bug, but at least there is an easy workaround.
Note: in fact, you don't need to restart Chrome.

Emacs on windows 7, setting up color themes


  1. Create this folder: C:\Users\your_user_name\AppData\Roaming\.emacs.d\color-theme-6.6.0 (change or remove the version number as you like)
  2. Go to the color-theme homepage and get the latest release.  I got color-theme-6.6.0.zip
  3. Unzip and dump the file color-theme.el and the folder themes into the folder you created in step 1
  4. As per the references for ColorTheme and the emacs Load Path, add the following to your .emacs (you may have to strip out any previous color-theme settings yourself):
         (add-to-list 'load-path "~/.emacs.d/color-theme-6.6.0") 
         (require 'color-theme)
         (setq color-theme-is-global t)
         (color-theme-initialize)
         ;; A nice dark color theme
         (color-theme-lawrence)

Wednesday, May 8, 2013

Virtual destructors for base class


I always forget to do this... noting it down!

Virtual destructors are useful when you might delete an instance of a derived class through a pointer to base class:
class Base 
{
    // some virtual methods
};

class Derived : public Base
{
public:
    ~Derived()
    {
        // Do some important cleanup
    }
};
Here, you'll notice that I didn't declare Base's destructor to be virtual. Now, let's have a look at the following snippet:
Base *b = new Derived();
// use b
delete b; // Here's the problem!
Since Base's destructor is not virtual and b is a Base* pointing to a Derived object, delete b has undefined behaviour. In most implementations, the call to the destructor is resolved like any non-virtual call, meaning that the destructor of the base class is called but not the one of the derived class, resulting in a resource leak.
To sum up, always make base classes' destructors virtual when they're meant to be manipulated polymorphically.
If you want to prevent the deletion of an instance through a base class pointer, you can make the base class destructor protected and non-virtual; by doing so, the compiler won't let you call delete on a base class pointer.
You can learn more about virtuality and virtual base class destructor in this article from Herb Sutter.

A virtual constructor is not possible, but a virtual destructor is. Let's experiment...
#include <iostream>
using namespace std;
class base
{

public:
    base(){cout<<"Base Constructor Called\n";}
    ~base(){cout<<"Base Destructor called\n";}

};
class derived1:public base
{

public:
    derived1(){cout<<"Derived constructor called\n";}
    ~derived1(){cout<<"Derived destructor called\n";}

};
int main()
{

    base* b;
    b=new derived1;
    delete b;

}
The above code outputs the following:
Base Constructor Called
Derived constructor called
Base Destructor called
The construction of the derived object follows the usual construction order, but when we delete b (the base pointer) we find that only the base destructor is called. That is not what should happen. To get the appropriate behaviour we have to make the base destructor virtual. Now let's see what happens in the following:
#include <iostream>
using namespace std;
class base
{

public:
    base(){cout<<"Base Constructor Called\n";}
    virtual ~base(){cout<<"Base Destructor called\n";}

};
class derived1:public base
{

public:
    derived1(){cout<<"Derived constructor called\n";}
    ~derived1(){cout<<"Derived destructor called\n";}

};
int main()
{

    base* b;
    b=new derived1;
    delete b;

}
The output changes to the following:
Base Constructor Called
Derived constructor called
Derived destructor called
Base Destructor called
So destruction through the base pointer (which actually points to a derived object!) now follows the destruction order: first the derived destructor, then the base. For constructors, on the other hand, there is no such thing as a virtual constructor. Thanks (Write code and have fun!!!)

Monday, April 22, 2013

Emacs copy rectangle


1. select the rectangle (set the mark at one corner, move point to the opposite corner)
2. C-x r r followed by a register key (e.g. Enter) to copy the rectangle into that register
3. go to the other buffer
4. C-x r i followed by the same register key to insert the rectangle
Note: M-w (Esc w) would copy the plain region, not the rectangle; the C-x r commands are the rectangle-aware ones.


Friday, March 29, 2013

Batch rename files in Bash


$ rename s/"SEARCH"/"REPLACE"/g *
This will replace the string SEARCH with REPLACE in every file (that is, *). The /g means global, so if you had a "SEARCH SEARCH.jpg", it would be renamed "REPLACE REPLACE.jpg". Without /g, the substitution happens only once, leaving it named "REPLACE SEARCH.jpg". If you want case insensitivity, add /i (that is, /gi or /ig at the end).
With regular expressions, you can do lots more. For example, to prepend something to every filename:
$ rename s/'^'/'MyPrefix'/ *
That adds MyPrefix to the beginning of every filename. You can also append:
$ rename s/'$'/'MySuffix'/ *
Also, the -n option just shows what would be renamed and then exits. This is useful, because you can make sure your command is right before messing up all your filenames. :)

Thursday, March 28, 2013

How does one write code that best utilizes the CPU cache to improve performance?


The cache is there to reduce the number of times the CPU would stall waiting for a memory request to be fulfilled (avoiding the memory latency), and as a second effect, possibly to reduce the overall amount of data that needs to be transferred (preserving memory bandwidth).
Techniques for avoiding memory fetch latency are typically the first thing to consider, and they sometimes help a long way. Limited memory bandwidth is also a limiting factor, particularly for multicore and multithreaded applications where many threads want to use the memory bus. A different set of techniques addresses the latter issue.
Improving spatial locality means ensuring that each cache line is used in full once it has been mapped into the cache. When we have looked at various standard benchmarks, we have seen that a surprisingly large fraction of them fail to use 100% of the fetched cache lines before the lines are evicted.
Improving cache line utilization helps in three respects:
  • It tends to fit more useful data in the cache, essentially increasing the effective cache size.
  • It tends to fit more useful data in the same cache line, increasing the likelihood that requested data can be found in the cache.
  • It reduces the memory bandwidth requirements, as there will be fewer fetches.
Common techniques are:
  • Use smaller data types
  • Organize your data to avoid alignment holes (sorting your struct members by decreasing size is one way)
  • Beware of the standard dynamic memory allocator, which may introduce holes and spread your data around in memory as it warms up.
  • Make sure all adjacent data is actually used in the hot loops. Otherwise, consider breaking up data structures into hot and cold components, so that the hot loops use hot data.
  • Avoid algorithms and data structures that exhibit irregular access patterns, and favor linear data structures.
We should also note that there are other ways to hide memory latency than using caches.
Modern CPUs often have one or more hardware prefetchers. They train on misses in a cache and try to spot regularities. For instance, after a few misses to subsequent cache lines, the hardware prefetcher will start fetching cache lines into the cache, anticipating the application's needs. If you have a regular access pattern, the hardware prefetcher usually does a very good job. And if your program doesn't display regular access patterns, you may improve things by adding prefetch instructions yourself.
By regrouping instructions so that those that always miss in the cache occur close to each other, the CPU can sometimes overlap these fetches so that the application sustains only one latency hit (memory-level parallelism).
To reduce the overall memory bus pressure, you have to start addressing what is called temporal locality. This means that you have to reuse data while it still hasn't been evicted from the cache.
Merging loops that touch the same data (loop fusion), and employing rewriting techniques known as tiling or blocking, all strive to avoid those extra memory fetches.
While there are some rules of thumb for this rewriting exercise, you typically have to carefully consider loop-carried data dependencies to ensure that you don't change the semantics of the program.
These things really pay off in the multicore world, where you typically won't see much throughput improvement after adding the second thread.

I recommend reading the 9-part article What every programmer should know about memory by Ulrich Drepper if you're interested in how memory and software interact. It's also available as a 104-page PDF.
Sections especially relevant to this question might be Part 2 (CPU caches) and Part 5 (What programmers can do - cache optimization).