I have a class hierarchy of elements (more static) with different operations on them in another class hierarchy (more flexible). ... I decided to use the Double Dispatch technique:
elem.do(operationObj) { operationObj.doForA(this); }
Based on this, I'd say you've implemented the Visitor pattern exactly, and furthermore it's a well chosen design precisely because the operations hierarchy is supposed to be more flexible.
(Whether the pattern is immediately recognizable as such to other developers is a different question, and perhaps a less important one, as the Visitor pattern is one of the least understood patterns by the general developer community, so having names such as `accept` and `visit` is of limited utility. The `elem.do(operation)` scheme you've come up with probably communicates the intent better, and in fact I've used similar naming in this answer of mine (except that I renamed `visit` to `applyTo`). It provides a bit more in-depth treatment of what I'm about to say here, so check that out as well.)
Besides being a double-dispatch mechanism, at its core the Visitor pattern allows you to treat a group of concrete types (your elements) as a single abstract type - so that your client code can be written in a way that avoids type checking. However, there's a catch. I said the design is well chosen because the Visitor pattern implies a certain tradeoff - it is hard to add new kinds of elements (new subtypes in the element type hierarchy), but it is easy to add new operations. The reason is that the dispatching mechanism is tightly coupled to the number of different element types, because in general you want to enforce that all operations work on all elements (even if sometimes it's a no-op, if a no-op makes sense in a given context) - your client code should remain oblivious to the concrete type, and free to pass any operation to the `do` method of any element.
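To make the mechanism concrete, here's a minimal Java sketch of that scheme. The names (`Element`, `ElementA`, `ElementB`, `Operation`, `RenderOperation`) are placeholders rather than your actual types, and `doOp` stands in for your `do` (which happens to be a keyword in Java):

```java
// Minimal double-dispatch / Visitor sketch (illustrative names only).
interface Operation {
    void doForA(ElementA a);
    void doForB(ElementB b);
}

interface Element {
    void doOp(Operation op);   // the "do" method from the question
}

class ElementA implements Element {
    public void doOp(Operation op) { op.doForA(this); }  // second dispatch on the concrete type
}

class ElementB implements Element {
    public void doOp(Operation op) { op.doForB(this); }
}

// Adding a new operation is just a new class; no element has to change,
// and the compiler forces the new operation to handle every element type.
class RenderOperation implements Operation {
    public void doForA(ElementA a) { /* render an A */ }
    public void doForB(ElementB b) { /* render a B */ }
}
```

Client code can then call `element.doOp(new RenderOperation())` on any `Element` without ever checking its concrete type.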
So, these sorts of designs are suitable when the number of different element types is expected to be relatively stable, while the number of different operations is expected to change more frequently (i.e., there's more flexibility on the operations side). From your description, it seems that this is exactly the case.
Note that this tradeoff is the opposite of the one found in "normal" OOP dynamic polymorphism. There, you have an abstract type that defines a fixed interface (a limited set of abstract operations), which a number of derivatives have to implement in different ways (but within the constraints set by the abstraction) - so it's easy to add new derivatives (new types of elements), but it's hard to add new abstract operations (because a change to the interface propagates throughout the hierarchy).
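For contrast, a sketch of that "normal" arrangement (again, purely illustrative names):

```java
// The fixed set of operations lives in the abstract interface.
interface Shape {
    double area();
    double perimeter();
}

class Circle implements Shape {
    double r;
    public double area()      { return Math.PI * r * r; }
    public double perimeter() { return 2 * Math.PI * r; }
}

// Adding, say, a Square is easy: just another class.
// Adding a new operation (e.g. draw()) means editing the interface
// and every existing implementation.
```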
BTW, this tradeoff is not something that's specific to the Visitor pattern; it's just that the pattern is a manifestation of one of the two basic approaches to data abstraction (again, see my other answer).
The only thing missing is the traversing part
It's true that in the Go4 book the Visitor pattern is introduced in the context of traversing an object structure, and that a common example is traversing something like an abstract syntax tree, or a scene graph. But it is not strictly necessary to have an obvious tree (or graph) structure to traverse. There may be no traversal at all, you might just want to have the sort of abstraction described above, precisely because you need the flexibility on the operations side, and because you may want the compiler to enforce that all element-specific variants of an operation are implemented.
That said, traversal might arise in a different way. Because you now have an abstract element type, and a mechanism to dispatch operations on it, you can create composite recursive types, where an element contains one or more abstract elements (which may contain their own elements, and so on). There's then a tree internal to such an element, so when you apply an operation, you're traversing this internal tree. This sort of thing starts to resemble the algebraic data types you find in functional programming - where something like a List type might be represented in a way that's equivalent to an abstract List class at the root of a type hierarchy consisting of a concrete Empty type, and a concrete NonEmpty type that contains an item (list element) plus a "tail" that's an abstract List (kind of a linked-list structure, when you think about it). However, in object-oriented languages, this particular approach to design tends to be inefficient and clunky to work with (among other things, implementing operations properly on such recursive types might require a bit of a mindset shift, so there's a learning curve, and it gets even worse if you need type parameters/generics), but there might be a more constrained application that could work well for you.
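If you want to see what that looks like, here's a rough Java sketch of such a recursive list, where an operation traverses the structure by applying itself to the tail. The names (`ListElement`, `Empty`, `NonEmpty`, `Sum`) are illustrative, and the generics already hint at the clunkiness mentioned above:

```java
// An ADT-style list encoded as a small Visitor-ish hierarchy.
interface ListElement {
    <R> R apply(ListOperation<R> op);
}

interface ListOperation<R> {
    R forEmpty(Empty e);
    R forNonEmpty(NonEmpty ne);
}

class Empty implements ListElement {
    public <R> R apply(ListOperation<R> op) { return op.forEmpty(this); }
}

class NonEmpty implements ListElement {
    int head;
    ListElement tail;              // the recursive part
    NonEmpty(int head, ListElement tail) { this.head = head; this.tail = tail; }
    public <R> R apply(ListOperation<R> op) { return op.forNonEmpty(this); }
}

// The operation traverses the internal "tree" by recursing into the tail.
class Sum implements ListOperation<Integer> {
    public Integer forEmpty(Empty e)       { return 0; }
    public Integer forNonEmpty(NonEmpty n) { return n.head + n.tail.apply(this); }
}
```

So `new NonEmpty(1, new NonEmpty(2, new Empty())).apply(new Sum())` yields 3 - the traversal falls out of the structure itself.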
Is my approach an overkill because without the need for traversal, there is an easier solution?
It might be overkill, but that's something for you to decide, since among us here, you're the one who knows your problem domain best. I don't think the need for traversal (or lack thereof) makes that much of a difference, though; you can do it with any of the options below.
The simplest alternative (that keeps the same extensibility properties/tradeoffs) is to just check the type within your operations (a bunch of ifs or a switch statement). With this approach, an operation may or may not be an object - it could just be a free function that accepts an instance of the abstract element type. On the one hand, it's conceptually simple and efficient enough, but on the other it is error prone and a bit awkward for the operation implementers. Same tradeoff - relatively easy to add new operations, hard to add new element types. Again, you'd still be writing client code by calling operations on abstract elements, so the type checking is confined to the operation internals.
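A sketch of that alternative, written as a static method with `instanceof` checks (the closest thing to a free function in Java), reusing the `Element`/`ElementA`/`ElementB` placeholder types from the first sketch:

```java
class Renderer {
    static void render(Element element) {
        if (element instanceof ElementA) {
            ElementA a = (ElementA) element;
            // ... render an A ...
        } else if (element instanceof ElementB) {
            ElementB b = (ElementB) element;
            // ... render a B ...
        } else {
            // Nothing forces this chain to stay complete when a new element type
            // is added - that's where the error-proneness comes from.
            throw new IllegalArgumentException("Unhandled element type: " + element.getClass());
        }
    }
}
```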
Depending on the language, you might be able to utilize pattern matching facilities, if they are available, as well as destructuring (these are features that some mainstream OO languages adopted relatively recently from the functional world). The hope here is that the type system is robust enough so that you at least get a compiler warning if you don't cover all the cases. It's not drastically different from the option above, but here, you're relying on these language features to do things in a more structured, systematic way.
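In recent Java, for instance (sealed interfaces plus pattern matching for switch, so roughly Java 21+), that might look like this - type names are, again, placeholders:

```java
// The sealed hierarchy tells the compiler the complete set of element types.
sealed interface Element permits ElementA, ElementB {}
record ElementA() implements Element {}
record ElementB() implements Element {}

class Renderer {
    static void render(Element element) {
        switch (element) {
            case ElementA a -> { /* render an A */ }
            case ElementB b -> { /* render a B */ }
            // No default: if a new type is added to the sealed hierarchy, this
            // switch stops compiling until the new case is handled.
        }
    }
}
```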
With the Visitor pattern, at least in a statically typed language, you're sort of leveraging the help of the compiler by defining the set of abstract "visit" methods that must be overridden in derived operations (visitors).
There's also a somewhat unorthodox variant of the pattern where, instead of having a visitor hierarchy, you make the `do` method (the `accept` method) take in several lambdas, one for each concrete element type, and then you only call the appropriate lambda - it's kind of like "poor man's pattern matching", and again, can get clunky, so YMMV with this.
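A sketch of that lambda-based variant (illustrative names; the `match` method plays the role of `do`/`accept`):

```java
import java.util.function.Function;

interface Element {
    <R> R match(Function<ElementA, R> ifA, Function<ElementB, R> ifB);
}

class ElementA implements Element {
    public <R> R match(Function<ElementA, R> ifA, Function<ElementB, R> ifB) {
        return ifA.apply(this);   // only the lambda for this concrete type is invoked
    }
}

class ElementB implements Element {
    public <R> R match(Function<ElementA, R> ifA, Function<ElementB, R> ifB) {
        return ifB.apply(this);
    }
}
```

Usage would be something like `String label = element.match(a -> "an A", b -> "a B");` - each "operation" is just a set of lambdas passed at the call site.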
As for the traversal itself (assuming it's not just a simple iteration over a list), you might place the traversal code in the elements, or in the operations - one thing Go4 points out is that in the latter case you'd be duplicating the traversal code for each concrete element type that's a composite, but that you might want to go down this route if the traversal process is somehow dependent on the details/results of the operations on the elements.
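To illustrate the two placements, here's a rough sketch assuming a composite element type (I'm calling it `Group`; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

interface Operation { void doForLeaf(Leaf l); void doForGroup(Group g); }
interface Element   { void doOp(Operation op); }

class Leaf implements Element {
    public void doOp(Operation op) { op.doForLeaf(this); }
}

// (a) Traversal in the element: written once, every operation gets it for free.
class Group implements Element {
    List<Element> children = new ArrayList<>();
    public void doOp(Operation op) {
        op.doForGroup(this);
        for (Element child : children) child.doOp(op);
    }
}

// (b) Alternatively, keep Group.doOp as a plain dispatch (just op.doForGroup(this))
// and have each operation loop over g.children itself - the traversal code is then
// duplicated per operation, but it can depend on what the operation computes
// along the way.
```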
P.S. Finally, I'd also like to draw your attention to Erik Eidt's answer, since it mentions so-called "data-oriented" programming, which takes a very different approach, so it might be worth considering before you decide to commit to this one.