Monthly Archives: March 2013

Auto Storage Class Specifier (C++11)

The auto keyword lets the compiler deduce a variable's type for you: it specifies that the type of the variable being declared will be deduced automatically from its initializer. For functions, it specifies that the return type is a trailing return type. You can declare auto variables in block scope, in namespace scope, and in the init-statement of a for loop; in all of these, the type of the variable may be omitted and the keyword auto used instead.


auto variable = initializer;

auto function(parameters) -> return_type;

Before C++11, auto was a storage-class specifier: it could be applied only to names of variables declared in a block or to names of function parameters. Since such variables have automatic storage by default anyway, the keyword was useless in a data declaration, which is why C++11 was free to repurpose it.


Any automatic variable can be initialized; the only exception is parameters. If you do not explicitly initialize an automatic object, its value is indeterminate. If you provide an initial value, the expression representing the initial value can be any valid C or C++ expression, and the object is set to that initial value each time the program block that contains the object's definition is entered. (Note that a variable declared with the C++11 auto keyword must have an initializer, since its type is deduced from it.)

Once the type of the initializer has been determined, the compiler determines the type that will replace the keyword auto as if using the rules for template argument deduction from a function call. The keyword auto may be accompanied by modifiers, such as const or &, which participate in the type deduction. For example, given const auto& i = expr;, the type of i is exactly the type of the argument u in an imaginary template template<class U> void f(const U& u) if the function call f(expr) were compiled. In a function declaration, the keyword auto does not perform type deduction; it only serves as part of the trailing-return-type syntax.

Lifetime

Variables with the auto storage-class specifier have local scope by default: a variable with automatic storage has block scope, whether it is declared in a plain block, inside a function, or in the init-statement of a for loop. Each time a code block is entered, storage for the auto objects defined in that block is made available; when the block is exited, those objects are no longer available for use. If an auto object is defined within a function that is recursively invoked, memory is allocated for the object at each invocation, unless it is declared static. Apply the static keyword to give a local variable static storage duration.


#include <iostream>
#include <cmath>
#include <typeinfo>

template<class T, class U>
auto add(T t, U u) -> decltype(t + u) // the return type of add is the type of operator+(T,U)
{
    return t + u;
}

auto get_fun(int arg) -> double(*)(double) // same as double (*get_fun(int))(double)
{
    switch (arg) {
    case 1: return std::fabs;
    case 2: return std::sin;
    default: return std::cos;
    }
}

int main()
{
    auto a = 1 + 2;
    std::cout << "type of a: " << typeid(a).name() << '\n';

    auto b = add(1, 1.2);
    std::cout << "type of b: " << typeid(b).name() << '\n';

    //auto int c; //compile-time error: auto is no longer a storage-class specifier

    auto d = {1, 2};
    std::cout << "type of d: " << typeid(d).name() << '\n';
}

/* output (type names are implementation-defined; shown here demangled):
type of a: int
type of b: double
type of d: std::initializer_list<int>
*/



Example of an Abstract Class

In C++, we can make a class abstract by declaring at least one of its member functions pure virtual. Conversely, a class with no pure virtual functions is a concrete class, whose objects can be instantiated.

A pure virtual function represents an abstract behavior and may have no implementation. For example, a draw method in a Shape class represents abstract behavior: the Shape class itself has no physical existence in the real world, so there is no question of drawing it. Its derived concrete classes such as Line, Circle, and Triangle do have physical existence, however, and the overridden draw method in these classes will have an implementation. An example of calculating area is given below.


//Example of an abstract class:
//calculate the area of a shape.

#include <iostream>

using namespace std;

//Abstract base class.
class Area
{
public:
    virtual double getArea() = 0; //pure virtual function

    void setArea(double d1, double d2)
    {
        dimension_1 = d1;
        dimension_2 = d2;
    }

protected:
    void getDim(double &d1, double &d2)
    {
        d1 = dimension_1;
        d2 = dimension_2;
    }

private:
    double dimension_1 = 0.0;
    double dimension_2 = 0.0;
};

//Public derived class from Area.
class rectangle : public Area
{
public:
    double getArea()
    {
        double d1 = 0.0;
        double d2 = 0.0;
        getDim(d1, d2);
        return d1 * d2;
    }
};

//Public derived class from Area.
class triangle : public Area
{
public:
    double getArea()
    {
        double d1 = 0.0;
        double d2 = 0.0;
        getDim(d1, d2);
        return 0.5 * d1 * d2;
    }
};

//Demo main function.
int main()
{
    Area *areaPtr;
    rectangle rect;
    triangle trian;

    rect.setArea(23.3, 22.2);
    trian.setArea(5.0, 3.0);

    areaPtr = &rect;
    cout << "Rectangle has area: " << areaPtr->getArea() << endl;

    areaPtr = &trian;
    cout << "Triangle has area: " << areaPtr->getArea() << endl;

    return 0;
}
Example of Bit Manipulation in C++

Access at the level of individual bits is part of the beauty and power of C++. Bit manipulation is not something you will use very frequently, but if you are working with very limited resources, it can be very handy. Below is an example of converting integers into binary representations, and binary representations back into integers, by manipulating bits.

#include <bitset>
#include <iostream>
#include <string>
#include <limits>
using namespace std;

int main()
{
    // print some numbers in binary representation
    cout << "267 as binary short:     "
         << bitset<numeric_limits<unsigned short>::digits>(267)
         << endl;

    cout << "267 as binary long:      "
         << bitset<numeric_limits<unsigned long>::digits>(267)
         << endl;

    cout << "10,000,000 with 24 bits: "
         << bitset<24>(10000000) << endl;

    // transform binary representation into integral number
    cout << "\"1000101011\" as number:  "
         << bitset<100>(string("1000101011")).to_ulong() << endl;
}

/* output will be as follows:
267 as binary short:     0000000100001011
267 as binary long:      00000000000000000000000100001011
10,000,000 with 24 bits: 100110001001011010000000
"1000101011" as number:  555
*/


The Date Class

I am using a low level of complexity here to show how general C++ classes, operator overloading, and data hiding/encapsulation work. Below is the header file of the Date class; the complete implementation will follow. The complete code of the Date class can be used freely.

Questions/comments welcomed..!

Header File: date.h

#ifndef DATE_H
#define DATE_H

#include <iostream>

class Date
{
private:
    short day;
    short month;
    short year;
    short daysOfMonth(Date d);        /** returns the number of days in a month */
    static const short daysInMonth[]; /** array containing the 12 months' day counts */
    bool leapYear(short);             /** tells whether the year is a leap year or not */

public:
    /** constructor with default arguments */
    Date(short d = 1, short m = 1, short y = 1900);

    void display();                    /** display the date on the screen */
    void setDate(short, short, short); /** set the date with the given arguments */
    Date operator ++();                /** pre-increment operator, used as ++date1 */
    Date operator +(short);            /** plus operator, used as date1 + 5 */
    Date operator --();                /** pre-decrement operator, used as --date1 */
    Date operator -(short);            /** minus operator, used as date1 - 5 */
    short operator -(Date);            /** returns the number of days between two dates */
    Date operator +=(short);           /** add a short to the left-hand operand and return it */
    Date operator -=(short);           /** subtract a short from the left-hand operand and return it */
    short operator -=(Date);           /** subtract a Date from the left-hand operand and return the day difference */
    friend Date operator -(short, Date); /** minus operator, used as 5 - date1 */

    /** logical comparison operators */
    bool operator <(Date);
    bool operator >(Date);
    bool operator <=(Date);
    bool operator >=(Date);
    bool operator ==(Date);
    bool operator !=(Date);

    /** stream insertion/extraction operators for input and output with plain cin and cout */
    friend std::ostream& operator <<(std::ostream&, Date&);
    friend std::istream& operator >>(std::istream&, Date&);
};

#endif // DATE_H

Algorithms matter!

Algorithms matter! Knowing which algorithm to apply under which set of circumstances can make a big difference in the software you produce. If you don't believe us, just read the following story about how Gary turned failure into success with a little analysis and by choosing the right algorithm for the job.

Once upon a time, Gary worked at a company with a lot of brilliant software developers. Like most organizations with a lot of bright people, there were many great ideas, and people to implement them in the software products. One such person was Graham, who had been with the company from its inception. Graham came up with an idea for finding out whether a program had any memory leaks, a common problem with C and C++ programs at the time. If a program ran long enough and had memory leaks, it would crash because it would run out of memory. Anyone who has programmed in a language that doesn't support automatic memory management and garbage collection knows this problem well.

Graham decided to build a small library that wrapped the operating system's memory allocation and de-allocation routines, malloc() and free(), with his own functions. Graham's functions recorded each memory allocation and de-allocation in a data structure that could be queried when the program finished. The wrapper functions recorded the information and called the real operating system functions to perform the actual memory management. It took just a few hours for Graham to implement the solution and, voila, it worked! There was just one problem: the program ran so slowly when it was instrumented with Graham's libraries that no one was willing to use it. We're talking really slow here. You could start up a program, go have a cup of coffee (or maybe a pot of coffee), come back, and the program would still be crawling along. This was clearly unacceptable.

Now Graham was really smart when it came to understanding operating systems and how their internals work. He was an excellent programmer who could write more working code in an hour than most programmers could write in a day. He had studied algorithms, data structures, and all of  the standard  topics in college, so why did the code execute so much slower with the wrappers inserted? In this case, it was a problem  of  knowing enough to make the program work, but not thinking through the details to make it work quickly. Like many creative people, Graham was already thinking about his next program and didn’t want to go back to his memory leak program to find out what was wrong. So, he asked Gary to take a look at it and see whether he could fix it. Gary was more of  a compiler and software engineering type of  guy and seemed to be pretty good at honing code to make it release-worthy.

Gary thought he’d talk to Graham about the program before he started digging into the code. That way, he might better understand how Graham structured his solution and why he chose particular implementation options.

Understand the Problem

A good way to solve problems is to start with the big picture: understand the problem, identify potential causes, and then dig into the details. If you decide to try to solve the problem because you think you know the cause, you may solve the wrong problem, or you might not explore other, possibly better, answers. The first thing Gary did was ask Graham to describe the problem and his solution.

Graham said that he wanted to determine whether a program had any memory leaks. He thought the best way to find out would be to keep a record of all memory that was allocated by the program, whether it was freed before the program ended, and a record of where the allocation was requested in the user's program. His solution required him to build a small library with three functions:


A wrapper around the operating system's memory allocation function


A wrapper around the operating system's memory de-allocation function


A wrapper around the operating system's function called when a program exits

This custom library would be linked with the program under test in  such a way that the customized functions would be called instead of   the operating system’s functions. The custom malloc()  and free() functions would keep track of  each allocation and de-allocation. When the program under test finished, there would be no memory leak if every allocation was subsequently de-allocated. If there were any  leaks,  the  information  kept  by  Graham’s   routines  would  allow  the programmer to find the code that caused them. When the exit() function was called, the custom library routine would display its results before actually exiting. Graham sketched out what his solution looked like.

The description seemed clear enough. Unless Graham was doing something terribly wrong in his code to wrap the operating system functions, it was hard to imagine that there was a performance problem in the wrapper code. If there were, then all programs would be proportionately slow. Gary asked whether there was a difference in the performance of the programs Graham had tested. Graham explained that the running profile seemed to be that small programs (those that did relatively little) all ran in acceptable time, regardless of whether they had memory leaks. However, programs that did a lot of processing and had memory leaks ran disproportionately slowly.

Experiment if Necessary

Before going any further, Gary wanted to get a better understanding of the running profile of programs. He and Graham sat down and wrote some short programs to see how they ran with Graham's custom library linked in. Perhaps they could get a better understanding of the conditions that caused the problem to arise. The first test program Gary and Graham wrote was Program A.

Program A code:

#include <stdlib.h>

int main(int argc, char **argv) {
    int i = 0;
    for (i = 0; i < 1000000; i++) {
        malloc(32);
    }
    exit(0);
}

They ran the program and waited for the results. It took several minutes to finish. Although computers were slower back then, this was clearly unacceptable. When this program finished, there were 32 MB of memory leaks. How would the program run if all of the memory allocations were de-allocated? They made a simple modification to create Program B.

Program B code:

#include <stdlib.h>

int main(int argc, char **argv) {
    int i = 0;
    for (i = 0; i < 1000000; i++) {
        void *x = malloc(32);
        free(x); /* every allocation is de-allocated */
    }
    exit(0);
}


When they compiled and ran Program B, it completed in a few seconds. Graham was convinced that the problem was related to the number of memory allocations open when the program ended, but couldn't figure out where the problem occurred. He had searched through his code for several hours and was unable to find any problems. Gary wasn't as convinced as Graham that the problem was the number of memory leaks. He suggested one more experiment and made another modification, Program C, in which the de-allocations were grouped together at the end of the program.

Program C code:

#include <stdlib.h>

int main(int argc, char **argv) {
    int i = 0;
    void *addrs[1000000];
    for (i = 0; i < 1000000; i++) {
        addrs[i] = malloc(32);
    }
    for (i = 0; i < 1000000; i++) {
        free(addrs[i]); /* de-allocations grouped at the end */
    }
    exit(0);
}


This program crawled along even slower than the first program! This example invalidated the theory that the number of memory leaks affected the performance of Graham's program. However, the example gave Gary an insight that led to the real problem.

It wasn't the number of memory allocations open at the end of the program that affected performance; it was the maximum number of them that were open at any single time. If memory leaks were not the only factor affecting performance, then there had to be something about the way Graham maintained the information used to determine whether there were leaks. In Program B, there was never more than one 32-byte chunk of memory allocated at any point during the program's execution. The first and third programs had one million open allocations.

Allocating and de-allocating memory was not the issue, so the problem had to be in the bookkeeping code Graham wrote to keep track of the memory.

Gary asked Graham how he kept track of the allocated memory. Graham replied that he was using a binary tree where each node was a structure consisting of pointers to the child nodes (if any), the address of the allocated memory, the size allocated, and the place in the program where the allocation request was made. He added that he was using the memory address as the key for the nodes, since there could be no duplicates, and that this decision would make it easy to insert and delete records of allocated memory.

Using a binary tree is often more efficient than simply using an ordered linked list of items. If an ordered list of n items exists, and each item is equally likely to be sought, then a successful search uses, on average, about n/2 comparisons to find an item. Inserting into and deleting from an ordered list also requires one to examine or move about n/2 items on average. Computer science textbooks would describe the performance of these operations (search, insert, and delete) as being O(n), which roughly means that as the size of the list doubles, the time to perform these operations is also expected to double.

Using a binary tree can deliver O(log n) performance for these same operations, although the code may be a bit more complicated to write and maintain. That is, as the size of the list doubles, the time for these operations grows only by a constant amount. When processing 1,000,000 items, we expect to examine an average of 20 items, compared to about 500,000 if the items were contained in a list. Using a binary tree is a great choice, provided the keys are distributed evenly in the tree. When the keys are not distributed evenly, the tree becomes distorted and loses the properties that make it a good choice for searching.

Knowing a bit about trees and how they behave, Gary asked Graham the $64,000 (it is logarithmic, after all) question: "Are you balancing the binary tree?" Graham's response was surprising, since he was a very good software developer. "No, why should I do that? It makes the code a lot more complex." But the fact that Graham wasn't balancing the tree was exactly the problem causing the horrible performance of his code. Can you figure out why? The malloc() routine in C allocates memory (from the heap) in order of increasing memory addresses. Not only are these addresses not evenly distributed, the order is exactly the one that leads to right-oriented trees, which behave more like linear lists than binary trees. To see why, consider two binary trees built from the numbers 1-15. Tree (a) is created by inserting the numbers in ascending order: its root node contains the value 1, and there is a path of 14 nodes to reach the node containing the value 15. Tree (b) is created by inserting the same numbers in the order <8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15>: in this case, the root node contains the value 8, but the paths to all other nodes in the tree are three nodes or fewer. The search time is directly affected by the length of the longest path.

Algorithms to the Rescue

A balanced binary tree is a binary search tree for which the length of all paths from the root of the tree to any leaf node is as close to the same number as possible. Let's define depth(Li) to be the length of the path from the root of the tree to a leaf node Li. In a perfectly balanced binary tree with n nodes, for any two leaf nodes L1 and L2, the absolute value of the difference |depth(L2) - depth(L1)| <= 1; also depth(Li) <= log(n) for any leaf node Li. Gary went to one of his algorithms books and decided to modify Graham's code so that the tree of allocation records would be balanced by making it a red-black binary tree. Red-black trees (Cormen et al., 2001) are an efficient implementation of a balanced binary tree in which, given any two leaf nodes L1 and L2, depth(L1)/depth(L2) <= 2; also depth(Li) <= 2*log2(n+1) for any leaf node Li. In other words, a red-black tree is roughly balanced, which ensures that no path is more than twice as long as any other path.

The changes took a few hours to write and test. When he was done, Gary showed Graham the result. They ran each of the three programs shown previously.

Program A and Program C took just a few milliseconds longer than Program B. The performance improvement reflected approximately a 5,000-fold speedup. This is what might be expected when you consider that the average number of nodes to visit drops from 500,000 to 20. Actually, this is an order of magnitude off: you might expect a 25,000-fold speedup, but that is offset by the computational overhead of balancing the tree. Still, the results are dramatic, and Graham's memory leak detector could be released (with Gary's modifications) in the next version of the product.

Side Story

Given the efficiency of using red-black binary trees, is it possible that the malloc() implementation itself is coded to use them? After all, the memory allocation functionality must somehow maintain the set of allocated regions so they can be safely deallocated. Also, note that each of the programs listed previously makes allocation requests for 32 bytes. Does the size of the request affect the performance of malloc() and free() requests? To investigate the behavior of malloc(), we ran a set of experiments. First, we timed how long it took to allocate 4,096 chunks of n bytes, with n ranging from 1 to 2,048. Then, we timed how long it took to deallocate the same memory using three strategies:


freeUp
In the order in which it was allocated; this is identical to Program C

freeDown
In the reverse order in which it was allocated

freeScattered
In a scattered order that ultimately frees all memory

For each value of n we ran the experiment 100 times and discarded the best and worst performing runs. Figure 1-3 contains the average results of the remaining 98 trials. As one might expect, the performance of the allocation follows a linear trend: as the size of n increases, so does the time taken, proportional to n. Surprisingly, the way in which the memory is deallocated changes the performance. freeUp has the best performance, for example, while freeDown executes about four times as slowly.

The empirical evidence does not answer whether malloc() and free() use binary trees (balanced or not!) to store information; without inspecting the source for free(), there is no easy explanation for the different performance based upon the order in which the memory is deallocated.

The Moral of the Story

The previous story really happened. Algorithms do matter. You might ask whether the tree-balancing algorithm was the optimal solution for the problem. That's a great question, and one that we'll answer by asking another question: does it really matter? Finding the right algorithm is like finding the right solution to any problem. Instead of finding the perfect solution, the algorithm just has to work well enough. You must balance the cost of the solution against the value it adds. It's quite possible that Gary's implementation could be improved, either by optimizing his implementation or by using a different algorithm. However, the performance of the memory leak detection software was more than acceptable for the intended use, and any additional improvements would have been unproductive overhead.

The ability to choose an acceptable algorithm for your needs is a critical skill that any good software developer should have. You don't necessarily have to be able to perform detailed mathematical analysis on the algorithm, but you must be able to understand someone else's analysis. You don't have to invent new algorithms, but you do need to understand which algorithms fit the problem at hand.


From "Algorithms in a Nutshell" by George T. Heineman, Gary Pollice, and Stanley Selkow.

Cormen, Thomas H., Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, Second Edition. MIT Press / McGraw-Hill, 2001.


It is essential to understand what programming really means. Let us first look at a widely known definition of programming.

“A program is a precise sequence of steps to solve a particular problem.”

It means that when we say we have a program, we actually mean that we know about a complete set of activities to be performed in a particular order. The purpose of these activities is to solve a given problem.

Alan Perlis, a professor at Yale University, says:

“It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical? ”

It is a sarcastic statement about modern education: it means that modern education is not developing critical skills like planning, organizing, and paying attention to detail.

Steve Summit puts it like this: "At its most basic level, programming a computer simply means telling it what to do, and this vapid-sounding definition is not even a joke. There are no other truly fundamental aspects of computer programming; everything else we talk about will simply be the details of a particular, usually artificial, mechanism for telling a computer what to do.

Sometimes these mechanisms are chosen because they have been found to be convenient for programmers (people) to use; other times they have been chosen because they're easy for the computer to understand. The first hard thing about programming is to learn, become comfortable with, and accept these artificial mechanisms, whether they make 'sense' to you or not."

Why Programming is important

The question most people ask is: why should we learn to program when there are so many application software packages and code generators available to do the task for us? Well, the answer, as given by Matthias Felleisen in the book 'How to Design Programs', consists of two parts: "First, it is indeed true that traditional forms of programming are useful for just a few people. But, programming as we the authors understand it is useful for everyone: the administrative secretary who uses spreadsheets as well as the high-tech programmer. In other words, I have a broader notion of programming in mind than the traditional one. I will explain our notion in a moment. Second, I teach our idea of programming with a technology that is based on the principle of minimal intrusion. Hence, my notion of programming teaches problem analysis and problem-solving skills without imposing the overhead of traditional programming notations and tools." Hence, learning to program is important because it develops analytical and problem-solving abilities. It is also a creative activity and provides us a means to express abstract ideas.

Thus programming is fun and is much more than a vocational skill. By designing programs, we learn many skills that are important for all professions.

These skills can be summarized as: 

o Critical reading

o Analytical thinking

o Creative synthesis

Hence, while programming, one should:

o Pay attention to detail

o Think about reusability

o Think about the user interface

o Understand the fact that computers are stupid

o Comment the code liberally

 Paying attention to detail

In programming, the details matter. Paying attention to detail is a very important skill. A good programmer always analyzes the problem statement very carefully and in detail. One should pay attention to all aspects of the problem; one cannot be vague. One cannot describe a program 3/4th of the way, then say, "You know what I mean?", and have the compiler figure out the rest. Furthermore, you should pay attention to the calculations involved in the program, its flow, and, most importantly, its logic. Sometimes a grammatically correct sentence does not make any sense. For example, here is a verse from "Jabberwocky", a poem in Lewis Carroll's Through the Looking-Glass:

“Twas brillig, and the slithy toves

Did gyre and gimble in the wabe “

The grammar is correct, but there is no meaning. Similarly, the sentence "Mr. ABC sleeps thirty hours every day" is grammatically correct but illogical. In the same way, a program may be grammatically correct: it compiles and runs, but produces incorrect or absurd results and does not solve the problem. It is therefore very important to pay attention to the logic of the program.

Think about reusability

Whenever you are writing a program, always keep in mind that it could be reused at some other time. Also, try to write it in a way that allows it to be used to solve other related problems. A classic example of this is:

Suppose we have to calculate the area of a given circle. We know the area of a circle is (Pi * r^2). Now suppose we have written a program that calculates the area of a circle with a given radius. At some later time we are given a problem: find the area of a ring. The area of the ring can be calculated by subtracting the area of the inner circle from the area of the outer circle. Hence we can reuse the program that calculates the area of a circle to calculate the area of the ring.

Think about a good user interface

As programmers, we tend to assume that computer users know a lot of things; this is a big mistake. Never assume that the user of your program is computer literate. Always provide an easy-to-understand and easy-to-use interface that is self-explanatory.

Understand the fact that computers are stupid

Computers are incredibly stupid. They do exactly what you tell them to do: no more, no less, unlike human beings. Computers can't think for themselves. For example, if someone asks, "What is the time?", "Time please?" or just, "Time?", we understand in each case that he is asking for the time, but a computer is different: instructions to the computer must be explicitly stated. The computer will tell the time only if we ask it in the way we have programmed it to. When programming, it helps to be able to "think" as stupidly as the computer does, so that we are in the right frame of mind for specifying everything in minute detail, and not assuming that the right thing will happen by itself.

 Comment the code liberally

Always comment your code liberally. Comment statements do not affect the performance of the program, as they are ignored by the compiler and take no memory in the computer. Comments explain the functioning of a program, and they help other programmers, as well as the program's creator, to understand the code.

Program design recipe

In order to design a program effectively and properly, we must have a recipe to follow. In the book 'How to Design Programs', Matthias Felleisen and his co-authors state the idea of a design recipe very elegantly: "Learning to design programs is like learning to play soccer. A player must learn to trap a ball, to dribble with a ball, to pass, and to shoot a ball. Once the player knows those basic skills, the next goals are to learn to play a position, to play certain strategies, to choose among feasible strategies, and, on occasion, to create variations of a strategy because none fits." The authors then continue:

"A programmer is also very much like an architect, a composer, or a writer. They are creative people who start with ideas in their heads and blank pieces of paper. They conceive of an idea, form a mental outline, and refine it on paper until their writings reflect their mental image as much as possible. As they bring their ideas to paper, they employ basic drawing, writing, and music-playing skills to express certain style elements of a building, to describe a person's character, or to formulate portions of a melody. They can practice their trade because they have honed their basic skills for a long time and can use them on an instinctive level.

Programmers also form outlines, translate them into first designs, and iteratively refine them until they truly match the initial idea. Indeed, the best programmers edit and rewrite their programs many times until they meet certain aesthetic standards. And just like soccer players, architects, composers, or writers, programmers must practice the basic skills of their trade for a long time before they can be truly creative. Design recipes are the equivalent of soccer ball handling techniques, writing techniques, arrangements, and drawing skills."

 Hence to design a program properly, we must:

o Analyze a problem statement, typically expressed as a word problem.

o Express its essence, abstractly and with examples.

o Formulate statements and comments in a precise language.

o Evaluate and revise the activities in light of checks and tests; and

o Pay attention to detail.

All of these are activities that are useful not only for a programmer but also for a businessman, a lawyer, a journalist, a scientist, an engineer, and many others. We wish you good luck with programming!