Questions about operator overloading

I have two questions about operator overloading.

  1. For an iterator type, how should operator-> be overloaded? What should it return, assuming it is an iterator for a collection of class T objects?

  2. Why does operator++() return T& (a reference) while operator++(int) returns T (by value)? I understand that these are the prefix and postfix increments, but why the difference in return type?

EDIT: For Alf. The code is not complete, though it is functioning. Any suggestions for improvement are welcome.

#ifndef DHASH_H
#define DHASH_H

//#include <vector>
#include <memory>
#include <exception>
#include <new>
#include <algorithm>
#include <functional>
#include <cstdlib>      //realloc, free
#include <cstddef>      //size_t

namespace MCol
{
    template <typename KEY, typename VALUE, typename HASH_FUNCTION, typename KEY_COMP = std::equal_to<KEY> >
        class hash_container
        {
            private:
                struct entry
                {
                    KEY _k;
                    VALUE _v;

                    entry(const KEY& k, const VALUE& v)
                        :_k(k), _v(v)
                    {}

                    entry& operator=(const entry& e)
                    {
                        this->_k = e._k;
                        this->_v = e._v;
                        return *this;
                    }
                };

            private:
                struct bucket
                {
                    entry* a_Entries;
                    size_t sz_EntryCount;   

                    bucket()
                    {
                        sz_EntryCount = 0;
                        a_Entries = NULL;
                    }

                    ~bucket()
                    {
                        for(size_t szI = 0; szI < sz_EntryCount; ++szI)
                        {
                            a_Entries[szI].~entry();
                        }
                        free(a_Entries);
                    }

                    //Grow by 1. (Perhaps later try block increment. But wikipedia suggests that there is little difference between the two)
                    inline bool insert(const KEY& k, const VALUE& v) throw (std::bad_alloc)
                    {
                        if(find(k) != NULL)
                        {
                            return false;
                        }
                        a_Entries = static_cast<entry*>(realloc(a_Entries, sizeof(entry)*(++sz_EntryCount)));
                        if(a_Entries == NULL)
                        {
                            throw std::bad_alloc();
                        }

                        new (&a_Entries[sz_EntryCount - 1]) entry(k, v);
                        return true;
                    }

                    //Find entry, swap with last valid entry, remove if necessary.
                    inline bool erase(const KEY& k) throw(std::bad_alloc)
                    {
                        //Forwards or backwards? My guess is backwards is better.
                        entry* pE = a_Entries;
                        while(pE != a_Entries + sz_EntryCount)
                        {
                            if(pE->_k == k)
                            {
                                break;
                            }
                            ++pE;
                        }

                        if(pE == a_Entries + sz_EntryCount)
                        {
                            return false;
                        }

                        //Copy the last entry into the erased slot; no copy is needed if it is the only entry in the bucket or already the last one.
                        entry* pLast = a_Entries + sz_EntryCount - 1;
                        if((sz_EntryCount > 1) && (pE != pLast))
                        {
                            *pE = *pLast;
                        }
                        pLast->~entry();

                        a_Entries = static_cast<entry*>(realloc(a_Entries, sizeof(entry)*(--sz_EntryCount)));
                        if(a_Entries == NULL && sz_EntryCount > 0)
                        {
                            throw std::bad_alloc();
                        }

                        return true;
                    }

                    inline entry* find(const KEY& k) throw()
                    {
                        //Better implement a search policy.
                        entry* pE = a_Entries;
                        while(pE != a_Entries + sz_EntryCount)
                        {
                            if(pE->_k == k)
                            {
                                break;
                            }
                            ++pE;
                        }

                        if(pE == a_Entries + sz_EntryCount)
                        {
                            return NULL;
                        }

                        return pE;
                    }
                };

                HASH_FUNCTION& _hf;
                KEY_COMP _kc;

                size_t sz_TableSize;
                double d_MultFactor;                                            //Recalculate this as 1/sz_TableSize every time sz_TableSize changes.
                size_t sz_NextResizeLimit;
                size_t sz_EntryCount;
                double d_ExpectedLoadFactor;
                double d_CurrentLoadFactor;

                //If the load factor is relatively high (say >0.5 assuming sizeof(entry) == 2*sizeof(size_t)), it is more space-efficient to keep a straight bucket array. But if the load factor is low, memory consumption would be lower if a pointer array of entries is used here. But, because we would not be much concerned with a little additional memory being used when there are few entries, I think an array of bucket objects is better. Further, it bypasses a pointer lookup. May have to reconsider in a situation where multiple hash tables are used (perhaps as an array).
                bucket* a_Buckets;


                hash_container(const hash_container&);
                hash_container& operator=(const hash_container&);

                inline void calculateMultFactor() throw()
                {
                    d_MultFactor = 1.0f/static_cast<double>(sz_TableSize + 1);
                    //sz_NextResizeLimit = static_cast<size_t>(d_ExpectedLoadFactor*sz_TableSize);
                    //Have a look at this.
                    //TODO
                }

                void resize(size_t szNewSize) throw(std::bad_alloc)
                {
                    if(szNewSize == 0)
                    {
                        szNewSize = 1;
                    }
                    size_t szOldSize = sz_TableSize;
                    for(size_t szI = szNewSize; szI < szOldSize; ++szI)
                    {
                        a_Buckets[szI].~bucket();
                    }

                    a_Buckets = static_cast<bucket*>(realloc(a_Buckets, sizeof(bucket)*szNewSize));
                    if(a_Buckets == NULL)
                    {
                        throw std::bad_alloc();
                    }
                    //Unnecessary at the moment. But, just in case that bucket changes.
                    for(size_t szI = szOldSize; szI < szNewSize; ++szI)
                    {
                         new (&a_Buckets[szI]) bucket();
                    }

                    sz_TableSize = szNewSize;
                    calculateMultFactor();
                }

                inline bucket* get_bucket(const KEY& k) throw()
                {
                    return a_Buckets + _hf(k, sz_TableSize);
                }

                inline bool need_resizing() const throw()
                {
                    //TODO: compare the current load factor against d_ExpectedLoadFactor.
                    return false;
                }
            public:
                //typedef iterator void*;
                //typedef const_iterator void*;

                //iterator Insert(KEY& k, VALUE& v);
                //VALUE& Find(Key& k);
                //const VALUE& Find(Key& k);
                //iterator Find(KEY k);
                //const_iterator Find(KEY k);
                //void Delete(KEY& k);
                //void Delete(iterator it);
                //void Delete(const_iterator it);
                class iterator
                {
                    private:
                        entry* p_Entry;
                        bucket* p_Bucket;

                        friend class bucket;

                    public:
                        iterator(entry* pEntry)
                            :p_Entry(pEntry)
                        {
                        }

                        iterator()
                        {
                            p_Entry = NULL;
                        }

                        iterator(const iterator& it)
                        {
                            this->p_Entry = it.p_Entry;
                        }

                        inline VALUE& operator*() const
                        {
                            return p_Entry->_v;
                        }

                        inline bool operator==(const iterator& it) const
                        {
                            return this->p_Entry == it.p_Entry;
                        }

                        inline bool operator!=(const iterator& it) const
                        {
                            return !(*this == it);
                        }

                        inline iterator& operator=(const iterator& it)
                        {
                            this->p_Entry = it.p_Entry;
                            return *this;
                        }

                        inline VALUE* operator->() const
                        {
                            return &p_Entry->_v;
                        }

                        inline iterator operator++()
                        {
                            return *this;
                        }

                        inline iterator& operator++(int)
                        {
                            //WRONG!!!
                            //TODO : Change this.
                            return *this;
                        }
                };

            private:
                iterator _EndIt;

            public:
                hash_container(HASH_FUNCTION& hf, size_t szTableSize = 1024, double dLoadFactor = 0.7f, KEY_COMP kc = KEY_COMP())throw(std::bad_alloc)
                    :_hf(hf), sz_TableSize(szTableSize), d_ExpectedLoadFactor(dLoadFactor), _kc(kc)
                {
                    if(d_ExpectedLoadFactor < 0.1f)
                    {
                        d_ExpectedLoadFactor = 0.1f;
                    }

                    a_Buckets = NULL;
                    sz_TableSize = 0;
                    if(szTableSize == 0)
                    {
                        szTableSize = 1;
                    }
                    resize(szTableSize);
                    d_CurrentLoadFactor = 0.0f;
                    sz_EntryCount = 0;

                    _EndIt = iterator(NULL);
                }

                virtual ~hash_container()
                {
                    for(size_t szI = 0; szI < sz_TableSize; ++szI)
                    {
                        a_Buckets[szI].~bucket();
                    }
                    free(a_Buckets);    //a_Buckets was allocated with realloc, so release it here.
                }

                inline iterator find(const KEY& k) throw()
                {
                    bucket* pBucket = get_bucket(k);
                    return pBucket->find(k);
                }

                inline bool insert(const KEY& k, const VALUE& v) throw(std::bad_alloc)
                {
                    bucket* pBucket = get_bucket(k);
                    bool bRet = false;
                    try
                    {
                        bRet = pBucket->insert(k, v);
                    }
                    catch(std::bad_alloc& e)
                    {
                        //What now?
                        throw e;
                    }
                    if(bRet == true)
                    {
                        ++sz_EntryCount;
                    }
                    return bRet;
                }

                inline VALUE& operator[](const KEY& k) throw(std::bad_alloc)
                {
                    bucket* pBucket = get_bucket(k);

                }

                inline bool erase(const KEY& k) throw(std::bad_alloc)
                {
                    bucket* pBucket =  get_bucket(k);
                    bool bRet = false;
                    try
                    {
                        bRet = pBucket->erase(k);
                    }
                    catch(std::bad_alloc& e)
                    {
                        throw e;
                    }
                    if(bRet == true)
                    {
                        --sz_EntryCount;
                    }
                    return bRet;
                }

                inline iterator end() const
                {
                    return _EndIt;
                }

                inline size_t size() const
                {
                    return sz_EntryCount;
                }

                inline size_t table_size() const
                {
                    return sz_TableSize;
                }

                inline double current_load_factor() const
                {
                    return d_MultFactor*static_cast<double>(sz_EntryCount);
                }

                inline double expected_load_factor() const
                {
                    return d_ExpectedLoadFactor;
                }
        };
}

#endif
asked Jan 19 '11 by nakiya



1 Answer

1. operator-> should almost always return a pointer type. When acting as an iterator with value_type T, it should return T*.

In some rarer cases, operator-> may return a different class type, which also has an operator-> member function.
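
For illustration, here is a minimal sketch of the usual pattern; the names (array_iterator, p_Current) are made up for this example and are not taken from the question's hash_container:

template <typename T>
class array_iterator
{
    T* p_Current;

public:
    explicit array_iterator(T* p) : p_Current(p) {}

    T& operator*() const { return *p_Current; }

    // operator-> returns T*; the compiler then applies the built-in -> to that
    // pointer, so it->member behaves like (*it).member.
    T* operator->() const { return p_Current; }

    // prefix: increment, then return a reference to this iterator (an lvalue)
    array_iterator& operator++() { ++p_Current; return *this; }

    // postfix: save a copy, increment, return the copy (the old value)
    array_iterator operator++(int) { array_iterator old(*this); ++p_Current; return old; }
};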

2. There are no technical requirements on what either form of operator++ must return, but the usual convention is to make them behave as much like the built-in operators as possible:

class T {
public:
    // pre-increment
    T& operator++() { increment_me(); return *this; }
    // post-increment
    T operator++(int) { T copy(*this); increment_me(); return copy; }
    //...
};

The built-in pre-increment expression ++x first increments the variable and then yields an lvalue referring to the incremented variable. A return type of T& mimics that behaviour.

The built-in post-increment expression x++ also increments the variable, but yields an rvalue copy of the variable's previous value. So most user-defined overloads return a copy of the original value by value (which, practically speaking, can never be a reference).
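
A quick way to see the difference with the built-in operator (plain int, nothing specific to your code):

int x = 5;
int a = ++x;    // prefix:  x is incremented to 6 first, then a is initialized from it; a == 6, x == 6
int b = x++;    // postfix: b is initialized from the old value 6, then x becomes 7;   b == 6, x == 7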

answered Oct 20 '22 by aschepler