Functional Programming Is Not Popular Because It Is Weird
by Malte Skarupke
I’ve seen people be genuinely puzzled about why functional programming is not more popular. For example, I’m currently reading “Out of the Tar Pit” where, after arguing for functional programming, the authors say:
Still, the fact remains that such arguments have been insufficient to result in widespread adoption of functional programming. We must therefore conclude that the main weakness of functional programming is the flip side of its main strength – namely that problems arise when (as is often the case) the system to be built must maintain state of some kind.
I think the reason for the lack of popularity is much simpler: Writing functional code is often backwards and can feel more like solving puzzles than like explaining a process to the computer. In functional languages I often know what I want to say, but it feels like I have to solve a puzzle in order to express it to the language. Functional programming is just a bit too weird.
To talk about functional programming let’s bake a cake. Taking a recipe from here, this is how you bake an imperative cake:
- Preheat oven to 175 degrees C. Grease and flour 2 – 8 inch round pans. In a small bowl, whisk together flour, baking soda and salt; set aside.
- In a large bowl, cream butter, white sugar and brown sugar until light and fluffy. Beat in eggs, one at a time. Mix in the bananas. Add flour mixture alternately with the buttermilk to the creamed mixture. Stir in chopped walnuts. Pour batter into the prepared pans.
- Bake in the preheated oven for 30 minutes. Remove from oven, and place on a damp tea towel to cool.
I’d take some issue with the numbering there (clearly every step is actually several steps) but let’s see how we bake a functional cake.
- A cake is a hot cake that has been cooled on a damp tea towel, where a hot cake is a prepared cake that has been baked in a preheated oven for 30 minutes.
- A preheated oven is an oven that has been heated to 175 degrees C.
- A prepared cake is batter that has been poured into prepared pans, where batter is mixture that has chopped walnuts stirred in. Where mixture is butter, white sugar and brown sugar that has been creamed in a large bowl until light and fluffy…
Ah screw it I can’t finish this. I don’t know how to translate the steps without mutable state. I can either lose the ordering or I can say “then mix in the bananas,” thus modifying the existing state. Anyone want to try finishing this translation in the comments? I’d be interested in a version that uses monads and one that doesn’t use monads.
Imperative languages have this huge benefit of having implicit state. Both humans and machines are really good at implicit state attached to time. When reading the cake recipe, you know that after finishing the first instruction the oven is preheated, the pans are greased and we have mixed a batter. This doesn’t have to be explicitly stated. We have the instructions and we know what the resulting state would be of performing the instructions. Nobody is confused by the imperative recipe. If I was able to actually finish writing the functional recipe and if I showed it to my mom, she would be very confused by it. (at least the version that doesn’t use monads would be very confusing. Maybe a version using monads wouldn’t be as confusing)
I’m writing this blog post because I ran into a related problem recently. C++ templates are accidentally a functional language. When this was realized, the problem wasn’t fixed; instead the C++ designers doubled down on functional templates, which can make it terribly annoying to convert code to generic code. Here’s something I wrote recently for a parser: (I know it’s stupid to write your own parser, but the old tools like yacc or bison are bad, and when I tried to use boost spirit I ran into a few problems that took way too long to figure out, until eventually I decided to just write my own)
ParseResult<V> VParser::parse_impl(ParseState state)
{
    ParseResult<A> a = a_parser.parse(state);
    if (ParseSuccess<A> * success = a.get_success())
        return ParseSuccess<V>{{std::move(success->value)}, success->new_state};
    ParseResult<B> b = b_parser.parse(state);
    if (ParseSuccess<B> * success = b.get_success())
        return ParseSuccess<V>{{std::move(success->value)}, success->new_state};
    ParseResult<C> c = c_parser.parse(state);
    if (ParseSuccess<C> * success = c.get_success())
        return ParseSuccess<V>{{std::move(success->value)}, success->new_state};
    ParseResult<D> d = d_parser.parse(state);
    if (ParseSuccess<D> * success = d.get_success())
        return ParseSuccess<V>{{std::move(success->value)}, success->new_state};
    return select_parse_error(*a.get_error(), *b.get_error(), *c.get_error(), *d.get_error());
}
This function parses a variant type called “V” by trying to parse the types A, B, C and D. They have better names in the real code but those names are not important. There is some obvious repetition here: This calls exactly the same code for four different parsers. C++ doesn’t really support the monad pattern, but I could make this reusable by writing a loop that iterates over all four, trying them in order:
template<typename Variant, typename... Types>
ParseResult<Variant> parse_variant(ParseState state, Parser<Types> &... parsers)
{
    boost::optional<ParseError> error;
    template<typename T> for (Parser<T> & parser : parsers) // imaginary syntax
    {
        ParseResult<T> result = parser.parse(state);
        if (ParseSuccess<T> * success = result.get_success())
            return ParseSuccess<Variant>{{std::move(success->value)}, success->new_state};
        else
            error = select_parse_error(error, *result.get_error());
    }
    return *error;
}

ParseResult<V> VParser::parse_impl(ParseState state)
{
    return parse_variant<V>(state, a_parser, b_parser, c_parser, d_parser);
}
There is some overhead here because I have to select one of the error messages to return, but overall this is a pretty straightforward transition to make. Except you can’t do this in C++. As soon as templates are involved, you have to think more functionally. Here is my solution:
template<typename Variant, typename First>
ParseResult<Variant> parse_variant(ParseState state, Parser<First> & first_parser)
{
    ParseResult<First> result = first_parser.parse(state);
    if (ParseSuccess<First> * success = result.get_success())
        return ParseSuccess<Variant>{{std::move(success->value)}, success->new_state};
    else
        return *result.get_error();
}

template<typename Variant, typename First, typename... More>
ParseResult<Variant> parse_variant(ParseState state, Parser<First> & first_parser, Parser<More> &... more_parsers)
{
    ParseResult<First> result = first_parser.parse(state);
    if (ParseSuccess<First> * success = result.get_success())
        return ParseSuccess<Variant>{{std::move(success->value)}, success->new_state};
    else
    {
        ParseResult<Variant> more_result = parse_variant<Variant>(state, more_parsers...);
        if (ParseSuccess<Variant> * more_success = more_result.get_success())
            return std::move(*more_success);
        else
            return select_parse_error(*result.get_error(), *more_result.get_error());
    }
}

ParseResult<V> VParser::parse_impl(ParseState state)
{
    return parse_variant<V>(state, a_parser, b_parser, c_parser, d_parser);
}
I am actually very happy with this. Sure it’s harder to read because the iteration is hidden in a recursion, but you should have seen what I had before I came across this solution: I had a struct with a std::tuple<std::reference_wrapper<Parser<T>>…> member. If you’ve ever worked with a variadic sized std::tuple, you know that that alone turns any code into a puzzle.
In any case the point is this: I had some straight imperative code that was doing the same thing several times. In order to make it generic I couldn’t just introduce a loop around the repeated code; I had to completely change the control flow. There is too much puzzle solving here. In fact I didn’t solve this the first time I tried. In my first attempt I ended up with something far too complicated and then just left the code in the original form. Only after coming back to the problem a few days later did I come up with the simple solution above. Making code generic shouldn’t be this complicated. The work here is not figuring out what to do; it’s figuring out how to satisfy the system.
I get that too often in functional languages. I know C++ templates are a bad functional language, but even in good functional languages I spend too much time trying to figure out how to say things as opposed to figuring out what to say.
Now all that being said do I think that functional programming is a bad thing? Not at all! The benefits of functional programming are real and valuable. Everyone should learn at least one functional programming language and try to apply what they learned in other languages. But if functional programming languages want to become popular, they have to be less about puzzle solving.
Actually, it’s very easy to avoid using recursion at all with variadic templates. Just write a function that will do a for loop over the types for you:
template<typename Func, typename... Types>
void for_each_type(Func func, Types &&... types)
{
    std::initializer_list<int>{ (func(std::forward<Types>(types)), 0)... };
}
Using it your code becomes something like this:
boost::optional<ParseError> error;
boost::optional<ParseSuccess<Variant>> success_result;
for_each_type([&] (auto & parser)
{
    if (success_result) return;
    auto result = parser.parse(state);
    if (auto * success = result.get_success())
        success_result = ParseSuccess<Variant>{{std::move(success->value)}, success->new_state};
    else
        error = select_parse_error(error, *result.get_error());
}, types...);
return success_result.value_or(*error);
It’s still not ideal due to the inability to interrupt this loop, but that’s solvable with some more metaprogramming.
Thanks, that’s a neat trick. I had seen it before but never tried it. Trying it I ended up with this:
Which is… well it’s too much of a puzzle for my taste. People reading this are going to have trouble.
1. Since I can’t early out of the loop, I have to have a state machine that keeps track of whether I’ve had a success in the past, and if yes, early out of all future iterations.
2. The initializer_list trick relies on the knowledge that order of evaluation of arguments is well defined for initializer_list while it’s not well defined for normal function arguments.
3. The initializer_list trick relies on the comma operator.
4. The initializer_list trick builds up a temporary data structure only to satisfy a weird language design. The compiler should make this go away at compile time, but I don’t know enough about the language to be 100% certain about that, and I think most people won’t be.
5. I lost a bit of type information, having to use “auto” in two places where I had the full type before.
I would never use this at work. For toy projects it may be fine, but if there is a chance that somebody else has to maintain my code, this takes way too much explanation.
So yes, this solves the problem of having to use recursion and not being able to use iteration. It does not solve the problem of having to solve a puzzle just to repeat a small chunk of code a couple of times.
Functional programming is based on function application. Your recipe looks more like logic programming.
That could totally be. It’s difficult to translate programming styles to cake recipes 😉
Want to give it a shot?
I mainly tried to focus on never modifying existing state. (which is something that logic programming and pure functional programming have in common, which is why I might have ended up in the wrong paradigm)
cake = cooled(removed_from_oven(added_to_oven(30min, poured( greased(floured(pan)), stirred(chopped(walnuts), alternating_mixed(buttermilk, whisked(flour, baking soda, salt), mixed(bananas, beat_mixed(eggs, creamed_until(fluffy, butter, white sugar, brown sugar))))))))
🙂
That’s great. I still can’t put it into English sentences, but now I can see that putting it into a more mathematical form makes the problem easier to express in a functional style.
Oh btw you forgot to preheat the oven. Here is an attempt at a fix:
cake = cooled(removed_from_oven(added_to(30min, poured( greased(floured(pan)), stirred(chopped(walnuts), alternating_mixed(buttermilk, whisked(flour, baking soda, salt), mixed(bananas, beat_mixed(eggs, creamed_until(fluffy, butter, white sugar, brown sugar)))), preheated(175C, oven)))))
We’re also losing the information about which order to do some of the steps in. The reason for that is that some of these do not have an inherent order. That’s a feature of functional programming that steps which do not depend on each other do not get an arbitrary order assigned to them like they get in imperative code. We could do the stirring and the preheating in any order. However the recipe does list things in a certain order because doing the steps in that order makes the most sense. For example preheating is listed first, because it takes a while so the oven should preheat while you do the other steps. That information is lost in functional code, (intentionally) which makes the recipe slightly harder to follow.
Related to that is the problem that when following these instructions, we have to manually keep track of how far along we are in each step. In this recipe, I have to basically keep a list of steps that I have already done and how far along I am in each substep. In the imperative recipe that is done for me automatically because I just have to remember which sentence I am on. That sentence uniquely determines which steps and substeps I have already done.
The preheat step and most of the orders of operation are really just “optimizations”, time-saving measures that are not really intrinsic to the operation. You need a hot oven to do the cooking. Maybe you need to heat it up beforehand, maybe you already have a hot oven ready, maybe you have a quick-heat oven that can instantly warm up. A functional programmer might say that’s a problem for the “compiler” (the cook) to reckon with.
I’m on your side; functional programming can have some real issues with expressivity. Functional is a very strict set of rules that not everything fits into very well. Seemed like a fun exercise though.
Malte – Excellent article. I try to at least know newer programming paradigms, but functional has left me cold. Maybe you have explained why.
Brett- This form completely reminds me of very very early UNIX C style, only it was done then with braces, not parens. I wonder if there is a relation… Over my head.
Good luck trying to convince a layperson to bake that cake of yours.
Of course a Haskell/ML style could be written more like (neither of which I’ve actually programmed in):
cake = bake(cake_mixture, 30min, prepare(pan, (grease, flour)))
where cake_mixture =
creamed :until_fluffy ‘butter’ ‘white’ ‘sugar’ ‘brown sugar’
|> beat_mixed_with ‘eggs’
|> mixed_with ‘bananas’
|> mixed_with :alternating ‘buttermilk’ ‘dry_goods’
where dry_goods = whisked ‘flour’ ‘baking soda’ ‘salt’
Agree with the general point you’re making though.
Thanks, that’s a good one. The pipe forward operator (not sure what its real name is) allows you to write the code forwards. So this suffers less from the “functional code is often backwards” problem that I talked about at the beginning. It’s still there though.
Can you try writing it so that none of the steps are backwards?
You also forgot to preheat the oven and to chop and stir in the walnuts. I don’t know precisely how the pipe forward operator works, but here’s an attempted fix:
cake = bake(cake_mixture, 30min, prepare(pan, (grease, flour)), preheated(175C, oven))
where cake_mixture =
creamed :until_fluffy ‘butter’ ‘white’ ‘sugar’ ‘brown sugar’
|> beat_mixed_with ‘eggs’
|> mixed_with ‘bananas’
|> mixed_with :alternating ‘buttermilk’ ‘dry_goods’
|> mixed_with chopped ‘walnuts’
where dry_goods = whisked ‘flour’ ‘baking soda’ ‘salt’
I’m not sure if the “where” clause has to stay at the end like I did it, or if it should come before the last step that I inserted.
It’s sometimes annoying how often, when people have criticisms about “functional programming”, they really translate to just criticisms of Haskell. I too find Haskell’s backward “where” clause annoying, but it’s perfectly possible to do functional programming without such a syntactic construct, and indeed languages like OCaml don’t even have such a construct.
See below for baking a cake in a functional language without any “backward” steps:
let cake =
let dry_goods = whisked [flour; baking_soda; salt] in
let cake_mixture =
creamed ~until:`Fluffy [butter; white; sugar; brown sugar]
|> beat_mixed ~with_:eggs
|> mixed ~with_:bananas
|> mixed ~with_:(alternating [buttermilk; dry_goods])
|> mixed ~with_:(chopped walnuts) in
let oven = preheated ~at:175C oven in
let pan = prepare pan ~with_:[grease; flour] in
bake ~pan ~oven ~min:30 cake_mixture
|> remove_from_oven
|> cooled
*cough* `do` notation in Haskell *cough*
Really, though, most problems map onto FP more easily than you’d first think. I mean, it definitely takes some practice, but it’s easier than it seems, and this is coming from a person who went “straight off the deep end”, learning Haskell after only coding procedural/OO Python and C++ (no templates).
A simple case like this may not be where FP shines. I’d say if procedural programming is for the use cases of the kitchen, then FP is for the problem domains of the engineers: it shines where you are confronted with high complexity and need to be able to model it effectively (= understandable by colleagues). Reminds me of how hard a sell relativity theory was to the Newtonians. Having never dealt with high energy physics, they didn’t see the need for the added complexity of Einstein’s theories. Consider this not too complex example: the code to generate all permutations of a given list. The Java implementation takes about 30 lines of code in its most readable form, and I still find it hard to read. The Erlang code is a one-liner and very understandable. On top of that it taught me a new way of looking at permutations.
this comment was edited by Malte Skarupke to be slightly less offensive. I was getting a chain of responses to this comment because people were offended by it. I want to keep the discussion civil, so I decided to edit this comment and to delete all the responses who were just taking offense
Since people are writing solutions in pseudo code in functional languages, I figured I should also write a solution in an imperative language to have a direct comparison. Here is C++:
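In outline it looks like this (a sketch only; helper types like Oven, Pan and Mixture and their method names are illustrative, assumed to be defined elsewhere):

```
// sketch: helper types and raw ingredients defined elsewhere
oven.preheat(degrees_c(175));
pan1.grease();
pan1.flour();
pan2.grease();
pan2.flour();
Mixture flour_mixture;
flour_mixture.add(flour);
flour_mixture.add(baking_soda);
flour_mixture.add(salt);
flour_mixture.whisk();

Mixture mixture;
mixture.cream(butter, white_sugar, brown_sugar); // until light and fluffy
for (Egg & egg : eggs)
    mixture.beat_in(egg);
mixture.mix_in(bananas);
while (!flour_mixture.empty() && !buttermilk.empty())
{
    mixture.add(flour_mixture.take_some());
    mixture.add(buttermilk.take_some());
}
mixture.stir_in(chopped(walnuts));
mixture.pour_into(pan1);
mixture.pour_into(pan2);

oven.wait_until_preheated(); // first blocking wait
oven.add(pan1);
oven.add(pan2);
sleep(minutes(30)); // second blocking wait
tea_towel.dampen();
oven.remove(pan1);
place_on(pan1, tea_towel);
oven.remove(pan2);
place_on(pan2, tea_towel);
```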
The translation is very straightforward: You literally just translate each line into pseudo code and you’re done. I only introduced one named variable that is not in the original: “Mixture mixture;”. That one is so important that it is implicit in the original recipe. In C++ the only way to have an implicit variable is to make it the “this” pointer, which I think would have made this code more confusing.
The point is that so far the functional solutions had to make bigger changes. They had to change the order to satisfy the syntax or they had to introduce more variables that were not in the original. And my claim is that these changes are the result of having to solve a puzzle while programming, which is “how do I express this problem in this system?”
Also none of the functional solutions so far have been complete. They forgot at least one step. I blame that on the translation being tricky. If you can’t do a straightforward translation, it’s easy to forget a step, especially when dealing with such a silly problem as this that you’re not taking seriously anyway. In C++ it was easy to include all the steps, so I did it even though this is a silly problem.
So what are the downsides of the C++ code? There are many. For example I’m blocking in two places where the thread will just sleep. The addition of coroutines and the await keyword can solve that.
Another downside is that at the end where I’m removing the pans from the oven there is an implicit contract there where I have to place the first pan before I can take the second pan. That is not stated anywhere. If a robot ran this code, maybe it would assert if I try to pick up the second pan while still holding the first pan, maybe it would just drop the first pan. This problem did not exist in the original recipe, and it does not exist in functional solutions. The problem appears to be that in C++ I just have too many details in my solution. This is an easy mistake to make in C++, to accidentally add ordering to the code that is not necessary, and that then later causes problems when you make changes.
And finally look at all that mutable state. I don’t write C++ code like this any more. I try to use more pass-by-value with move semantics, less pass-by-reference, and to be more functional in general. But if all I’m doing is translating the recipe directly, there is unfortunately a lot of mutable state. The reason, I think, is that recipes tend to be fairly simple, so recipe writers never had to deal with the problems that programmers face when they have too much mutable state.
Here is a stab at a similar pseudo-Haskell implementation:
bakedCake :: Cake
bakedCake = preheated_oven `par` cake where
-- step 1
-- preheat the oven
preheated_oven = preheat 175C oven
-- grease and flour 2 - 8 inch round pans
floured_pans = map (flour . grease) pans
-- whisk together flour, baking soda and salt
flour_mixture = whisk (flour baking_soda salt)
-- step 2
-- cream butter, white sugar and brown sugar until light and fluffy
mixture = iterate cream (butter white_sugar brown_sugar)
& fromJust . find is_light_and_fluffy
-- beat in eggs, one at a time
& flip (foldr beat_in) eggs
-- mix in bananas
& mix_in bananas
-- add flour mixture alternatively with the buttermilk
& getEndo (foldMap (Endo . add) (alternate flour_mixture buttermilk))
-- stir in chopped walnuts
& stir_in (chop walnuts)
-- pour batter into the prepared pans
[first_half, second_half] = take_half mixture
filled_pans = zipWith pour floured_pans [first_half, second_half]
-- step 3
-- bake in the preheated oven for 30 minutes
hot_pans = bake_for 30minutes preheated_oven filled_pans
damp_towel = dampen tea_towel
cake = hot_pans `placedOn` damp_towel
Of course, one could duplicate the original C++ code almost as-is using do notation and the IO monad, but I chose to adhere to a more recognizably functional style. The assumption was made that ingredients form a trivial Mixture and implement the Monoid interface for Mixture::add. As in the C++ version, raw ingredients and utensils are assumed to be defined elsewhere. The & operator (flipped function application, similar to |> in F# and other languages) is imported from Data.Function. Collections are treated as lists for simplicity. Preheating in parallel is represented by the `par` operator from Control.Parallel, which begins the evaluation of its first (left) argument in the background while returning the second (right) argument.
I think the robot would really have a lot of trouble with the recipe program.
You only have one tea_towel and use it on both cakes. I understand that the recipe only said one towel.
tea_towel.dampen();
oven.remove(pan1);
place_on(pan1, tea_towel);
oven.remove(pan2);
place_on(pan2, tea_towel);
But the real error is probably in the while loop: you assume that you need to stop when the flour_mixture or the buttermilk is empty. So you have to be really lucky to have both the flour and the buttermilk run out within the same iteration of the loop.
!flour_mixture.empty() && !buttermilk.empty() == !(flour_mixture.empty() || buttermilk.empty())
That condition stops the loop as soon as either one is empty, while it should only stop when both are empty. What you actually want is !(flour_mixture.empty() && buttermilk.empty()).
But you are right, the real problem with imperative programming is that you have to add a lot of details to the code which a programmer shouldn’t really have to care about, except when he really needs the best possible performance.
for (Egg & egg : eggs)
{
    mixture.beat_in(egg);
}
Most people expect this to just say: for each egg, beat it into the mixture.
But what it really says is:
– take an egg by reference from the eggs container.
– from object mixture get the address of its beat in function,
– then pass the egg to the member function. (the program doesn’t state if it is by reference or a move operation),
– repeat for all other eggs.
It is also impossible to know what happened to the egg after passing it to the function. (are they deleted in the function?).
I also think references and pointers make mutable state even worse, because they allow you to change any variable at any spot in the program, making your code very interdependent. OOP makes it even worse in that an object can hide a pointer as a member and modify variables when you call one of its functions.
Maybe it would be cool to have a language where all pointers/references are unique and thus cannot be copied, only moved. (something like Rust, I think)
while in a functional programming language you would state this as
mixture_with_eggs = foldl beatIn mixture eggs
This is an interesting example to unpack.
Your cake recipe is incredibly linear: for most steps, the output of the previous step is required for the next step to be possible. The only things that could be performed out of order are the two halves of step one. Thus, it maps very conveniently onto an imperative representation. It’s the recipe equivalent of a shell script.
What happens if, rather than baking a cake (shell script), you’re cooking a big family dinner (program)?
Now you have probably half a dozen recipes with overlapping ingredients, that contend for limited resources like preparation time, stove space and oven space. You probably have someone else to help you cook, but you’ll both also be required to spend some time socializing and serving wine. You also have a much higher possibility that something will go wrong (say, your in-laws arriving at exactly the wrong moment) causing you to have to re-calculate your recipe on the fly.
What do we do in these circumstances? We pull each recipe apart into its component pieces. Before we start, the turkey needs to be brined, that takes several hours so we have to do that the night before. Also, several of the recipes have intermediate stages that can be refrigerated, so those can be prepared in advance if necessary. In order to cook the roast potatoes, we need a turkey in the oven with about an hour remaining. The water for the vegetables needs to start being boiled a few minutes before the turkey comes out of the oven… and so on.
Almost without trying, our recipe for dinner is starting to resemble your functional/logical version a lot more than it resembles a sequence of steps. And it _has_ to, otherwise we’d end up with our turkey being served thirty minutes before the vegetables.
This whole exercise made me think in the same direction. I was thinking that if you were making lots of food, maybe in an assembly line, your solution description might look more and more like functional code.
So then the question becomes which path is easier: Start from the linear code like in C++ and then every time that you run into problems with scaling, refactor the code to be less linear; or define separate steps from the beginning so that you’ll have an easier time when it comes to scaling the code up. The current conventional wisdom seems to be that functional code has many benefits here in terms of managing complexity. But that’s not really what my blog post was about. My blog post was about the programmer who starts on her simple idea and wants to explain it to the computer, and she’s having a much easier time doing that in imperative code than in functional code.
The other thing is that really we are talking about build systems here. There is a reason why many build systems invent their own programming language to describe the build steps. If makefiles used C++ to describe the build steps, I don’t think that would be a good fit. However the tools that do each build step (compiler, linker) tend to be written in C++. So for the coordination we use a functional language, but for writing single steps we use imperative code. I’m honestly not sure what to take from that. Could just be a historical accident, or there could be more fundamental reasons for it.
You’ve attempted to describe a very linear process using a very linear paradigm (Imperative). As shown by others, you can use “currying” to write this process in a way that was very similar to your imperative list.
Maybe the “weird” argument is actually a problem of training. If you grew up on LISP and Haskell, the Imperative may seem equally weird.
The Imperative version also has a lot of hidden traps. Your imperative code is actually filled with fake “requirements”. For example, take the following lines:
> Mixture flour_mixture;
> flour_mixture.add(flour);
> flour_mixture.add(baking_soda);
> flour_mixture.add(salt);
> flour_mixture.whisk();
Does the flour really need to come first? The order should not really matter, but your code is pretty adamant that the flour comes first. Likewise your code also specifies that Pan1 must be oiled before Pan2. Again, that’s not strictly a requirement, you’re just putting it there. You’re also potentially greasing pans that have already been greased because you’re making assumptions about the starting state of every object in the system. You’re actually kind of assuming that you get a new Pan every time. How very immutable of you.
As written your code is not parallelizable, even though all of the readers can clearly see ways in which this can be parallelized. You can’t add more people to the process. Isn’t that a little weird?
At the end of the day, I understand the “weirdness”, but I’m not sure that it’s truly “weird” instead of just “unpracticed”.
Your article does bring up some great discussion points here about the functional and imperative concepts.
FWIW, functional programming can be really complicated to learn. But once you find your way with it, you’ll learn the art of simplification. How might I bake a cake in, say, Scala?
val bakedCake = for {
  oven <- preheatOven()
  uncookedCake <- mixIngredients()
  cake <- bakeCake(oven, uncookedCake)
} yield cake

bakedCake match {
  case Some(cake) => eat(cake)
  case None => println("cake is raw!")
}
With the rough cut I coded above, you’ll see that your cake example can be rather gorgeous. It is a practice of form meets function here.
A very interesting discussion – especially the different ‘solutions’ to the recipe. This whole article illustrates perfectly why I gave up writing C++, having used it since Version 1 for many many projects.
I decided that when it became more difficult to work out how to express a solution to a problem in the language than it was to solve the problem in the first place, that either C++ had lost its way, or that I had.
Currently I’m using C, Python, Java, PHP, SQL, Javascript, VBA for various client projects. Just starting to look into Kotlin. I try to keep up with what’s going on in C++, but as I never need it these days, I’m slipping behind!
” I can either lose the ordering or I can say “then mix in the bananas,” thus modifying the existing state. ”
So you’re incapable of saying “with three bananas mixed in”? I didn’t read beyond that as it’s clear that there’s nothing worthwhile here about functional programming.
Interestingly, older recipes tend to be much more declarative than newer recipes. Compare e.g. the cake recipes in http://www.gutenberg.org/files/17438/17438-h/17438-h.htm#page209 and http://www.gutenberg.org/files/26323/26323-h/26323-h.htm#Page_120 with the recipe above, and you’ll see that they are somewhere in between a fully imperative and a functional style. A more modern example of the in-between style can be found in ‘Modernist Cuisine’, with a sample recipe on http://modernistcuisine.com/recipes/caramelized-pumpkin-pie/ . It even has reasonably explicit imports… 🙂
That’s right! You’ve got it exactly! I have a feeling, deep down inside, that a functional language would be really great for writing a recursive descent parser… but every time I start doing it, I run into something for which I just can’t be bothered to do the mental gymnastics…
…. Hmmm…. I wonder what a person would be like who’d only ever done functional programming: they’d have to live their entire lives knowing ahead of time how everything’s going to turn out… which would be a real pisser if they ended up dying of cancer or something…
I remain convinced that the biggest stumbling block to learning Functional Programming is the 'student' already being familiar with Imperative Programming.
Years ago I assisted classes in FP to (largely) people who hadn’t programmed in any language. To my surprise they picked it up very quickly. A lot quicker than me & my peers, a few years before that, all familiar with ‘normal’ programming.
The main distinction, IMHO? We had to (to some degree) 'unlearn' imperative in order to understand functional. Eventually you can switch at will between the two styles, but it's much harder than learning FP from scratch.
I think that’s why it feels weird & cumbersome to so many people… they’re mostly programmers already familiar with imperative. It just goes to show; everything you learn shapes your brain in a certain way.
That is very interesting. What was the context of this? Did the students go on to do more advanced work in functional languages? For example would you expect them to be able to write video games in a functional language? Where a video game is an exercise where I find no end of puzzles about “how do I do this thing in a functional style?” (or see also http://prog21.dadgum.com/23.html)
Do you think those students would not perceive those problems as puzzles and would have an easy time implementing them? Because my immediate thought is that the students would probably quickly run into some of those puzzles that functional languages inevitably put up, and would then conclude that programming just isn’t for them.
The context was an entry level course at University (first or second year).
CS students encountered the functional course at the end of the year (or even the second year, it was over a decade ago), after they'd had the imperative course at the beginning of the (first) year.
For (most) students of 'Cognitive Artificial Intelligence' (a minor of Psychology or some such), functional programming was the first programming course they'd encounter.
I was a CS student as well, but in later years there was an option to make some money on the side as a 'student assistant', assisting the lecturer/dr. with comp-room exercises and even grading (pending final approval of the lecturer) some practical exercises.
I’ve assisted both ‘versions’ of the class (plus my own experiences when I first took the class). I expected the non-CS students to need a lot of help, but to my surprise they often performed better, especially initially, despite being completely new to programming.
It might’ve helped that this was Haskell (which has a lot of nice syntactic sugar; later on we even had a ‘home-made’ variant with nicer error messages, specially made for education), and of course the assignments were geared more towards the FP end of the spectrum.
But that doesn’t erase the (granted; largely anecdotal at this point) observed difference between the two groups.
P.S. To the commenter above my first comment: FP (especially lazily evaluated) is very good for parsing stuff. To the point that I’d say it’s the main strong point of the paradigm, even. Look up a concept called ‘parser combinators’ if you want to know more. (Boost Spirit also works with parser combinators.)
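Parser combinators like those the comment mentions can be sketched in a few lines of JavaScript (an illustration only; real libraries such as Parsec or Boost.Spirit are far richer):

```javascript
// A parser is a function: input -> { value, rest } or null on failure.
const char = (c) => (input) =>
  input[0] === c ? { value: c, rest: input.slice(1) } : null;

// Run p then q, combining their results with f.
const seq = (p, q, f) => (input) => {
  const r1 = p(input);
  if (!r1) return null;
  const r2 = q(r1.rest);
  if (!r2) return null;
  return { value: f(r1.value, r2.value), rest: r2.rest };
};

// Try p; if it fails, try q on the same input.
const alt = (p, q) => (input) => p(input) || q(input);

// Zero or more repetitions of p, collecting the values into an array.
const many = (p) => (input) => {
  const values = [];
  let rest = input;
  let r;
  while ((r = p(rest))) {
    values.push(r.value);
    rest = r.rest;
  }
  return { value: values, rest };
};

// Example: parse a run of 'a's and 'b's.
const ab = many(alt(char("a"), char("b")));
console.log(ab("abba!")); // { value: ['a','b','b','a'], rest: '!' }
```

The point is that `seq`, `alt` and `many` are ordinary higher-order functions, so grammars compose the same way any other functions do.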
FWIW my anecdotal evidence is the same.
My father is a civil engineer who learned to program on AutoLISP (that’s LISP for AutoCAD). It all seemed perfectly reasonable to him.
Many people learn their first elements of programming via Excel. The core built-in functions of Excel are a functional language. Just look at the IF() syntax. When I show people the IF() syntax in Excel, they “get it”.
My University had a 3rd year course on “non-Imperative” programming languages. It started with LISP and worked through several others, like J/FP & Prolog. After just 4 months of this class, the scores on the LISP questions were the highest despite those exam questions involving some features like metaprogramming.
Consider the number of people who drop out of first year programming because they “don’t get” pointers or recursion or parallelism in the imperative languages. To me, it’s honestly time to start thinking about starting with a functional language and drilling into the more complex things like Imperative later on. Some universities do this: https://www.quora.com/Which-universities-teach-their-first-CS-course-in-a-functional-programming-language
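Excel’s IF() really is just function application, which is part of why it feels natural; a toy JavaScript equivalent (note that, unlike Excel, this version evaluates both branches eagerly):

```javascript
// Excel-style IF as an ordinary function: IF(condition, thenValue, elseValue).
// Unlike Excel's lazy branches, both arguments are evaluated before the call.
const IF = (cond, thenValue, elseValue) => (cond ? thenValue : elseValue);

console.log(IF(3 > 2, "big", "small")); // "big"
console.log(IF(7 > 10, "big", IF(7 > 5, "medium", "small"))); // "medium"
```

Nesting calls like the second line is exactly how spreadsheet users compose conditions, without ever writing a statement.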
I understand your point, but I strongly believe that your problem is not at all with the paradigm itself, but your own professional history: Most programmers learn an imperative language as their first one, and they struggle with rewiring their brain to think in FP concepts. Once that transition is made, the problem actually turns around. It seems that it’s impossible to be fluent in both paradigms.
As a professional Haskell programmer originally coming from C++ I went through this myself. Now I have the exact opposite problem: Expressing myself in the imperative paradigm is a genuinely difficult and frustrating experience for me. It would take another few months of forcing myself to use C++ and never touch Haskell again to become fluent in the imperative language and idioms again.
This is also my experience as a Haskell teacher: People without a programming background find it very natural to express themselves in Haskell. If I were to expose them to an imperative language like C++ first, they would find it equally natural to express themselves in that one. The first language you learn becomes your mother tongue, and it’s a long and frustrating process to change it.
So why did I choose to go through this process? Well, once you are fluent in FP and especially Haskell, your productivity takes a huge leap – at least mine did. The reason is that engineering is (in my personal experience of course) much simpler and the resulting programs have a locality property that allows rich reasoning and almost frictionless refactoring. I can respond to new market conditions very quickly because of that, and I never fear that I’ve broken something along the way due to time pressure.
I’m trying to learn Haskell right now, but at the moment I am programming in C++ and I actually already miss a lot of the power you get in Haskell. For example it’s really annoying that I can’t use higher-order functions and closures in my work environment.
I actually prefer a language in the middle of FP and OOP. OOP is useful for interfacing with hardware and working with real objects, but it fails when you go too abstract. For example my program doesn’t really care if it uses a UART port, a file or an Ethernet network. Those are all streams and can be opened and closed, read from and written to.
But for other things it just fails: the famous shape example works for simple shapes like circles, rectangles etc. But where would a table or a button fit in? It’s also a shape…
So for the shape example I would prefer to use a closure; then a draw function would just go over a list of draw closures and execute them, instead of using shapes or “drawables”.
I guess I would prefer to have both: I would use objects for IO, while all calculations, control flow and manipulations would be done in an FP way.
I have also programmed in C before and sometimes used callback functions, but C and C++ just have an obscure syntax for all these things.
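The closure-based drawing scheme this comment describes might look like the following sketch (the shapes and their string output are made up for illustration; no real graphics API is involved):

```javascript
// Instead of a Shape class hierarchy, each drawable is just a closure.
// Anything can join the scene as long as it is a zero-argument function.
const drawCircle = (x, y, r) => () => `circle at (${x},${y}) radius ${r}`;
const drawButton = (label) => () => `button "${label}"`;

// The scene is a plain list of draw closures; drawing just runs each one.
const scene = [drawCircle(0, 0, 5), drawButton("OK")];
const draw = (scene) => scene.map((d) => d());

console.log(draw(scene));
```

A table or a button fits into `scene` as easily as a circle does, because nothing ever asks "what kind of shape is this?"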
As a practical matter the top 3 languages are Java, C, and C++. These are the only three that score above 5% of market share according to http://www.tiobe.com/tiobe_index. Java is way out in front, C lags behind but is still strong, C++ is next.
I know C very well, and had the misfortune to spend some time on C++. I never needed to get involved in Java but I’m keeping that door open. But I don’t like any language that enforces the OOP paradigm. I do everything now, including CGI programming, in C, Perl, or Linux shell. I never do Windows programming now, thank goodness. I’m strictly a Linux guy. So the only issue for me is not which languages to use, but how to organize large projects in C, with perhaps some Perl and shell commands for parts of the project. In any event, the only language I see any use for as a freelance programmer other than C and Perl is Java. Other languages I’ve used a bit are Fortran (high school), Basic, Visual Basic, LISP, and Prolog. I’ve also done assembly language for a living. But C is king as far as I’m concerned, next is shell (awk, sed, etc.) and Perl is nice too. Of course if the next great thing turns out to be really very useful, people will find out and jump all over it. It will be interesting to see what the big thing is in five years.
The title of your post should be Declarative Programming Is Not Popular Because It Is Weird. Functional programming is not opposed to imperative programming. Functional languages have excellent support for imperative programming.
First problem is that modelling a recipe as a program is actually not easy, and maybe even a bad example, so even OOP wouldn’t do that well. Of course you could write a conversion function that accepts a batter and returns a cake, but a functional program could do the same.
Also a recipe doesn’t really use any kind of higher order functions.
Because for example what does the bake function do?
Does it calculate the conversion of ingredients like sugar to caramel?
The expansion of baking soda to CO2 and its oxidation on the mixture?
As a functional programmer you look at it in a more abstract way. For example whether you bake in an oven, a microwave or on a stove top doesn’t matter to you (they might be higher-order functions which have slightly different conversion factors).
Nor does it matter in what you mix the ingredients. It will probably be a list.
A batter would be a List of Ingredients. The ingredients also include air for the fluffiness.
So you start with individual ingredients and end up with a list, which is a batter.
To introduce the air a function would calculate the maximum absorption of the batter and add it to the mixture. The same function could be used when whisking the eggs. You probably pass a higher order function to it which is the way you use to introduce the air. So using a Whipping Siphons would result in more air in the mixture than using a normal whisk.
The bake function would convert certain ingredients in the list to something else, either a new list of converted ingredients or a Cake, which is a tuple of temperature, volume, taste etc.
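The ingredient-list model this comment describes could be sketched like this (the conversion table is invented purely for illustration):

```javascript
// A batter is just a list of ingredients; baking maps each ingredient
// to what heat turns it into, via a (made-up) conversion table.
const conversions = { sugar: "caramel", "baking soda": "CO2", egg: "set egg" };
const bake = (batter) => batter.map((i) => conversions[i] ?? i);

const batter = ["sugar", "baking soda", "flour"];
console.log(bake(batter)); // ["caramel", "CO2", "flour"]
```

Note that `bake` never mutates `batter`; it produces a new list, which is the point the comment is making about conversion rather than modification.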
IMHO you nailed it. Humans naturally think in an imperative way. Computers also execute code in an imperative way – they execute instructions in order and operate on state (in memory). The functional programming paradigm is still useful in certain cases, but it is not an ultimate solution for all kinds of problems, as some people would like to think.
You mean: von Neumann architectures use instructions. Surely not all possible definitions of computers have an imperative nature. There isn’t anything inherently imperative about computers. But it is very natural for humans to manifest their will using imperatives, and programming is very much about imbuing your will into the nature around you. But it’s not the only way to think.
I’m soooo late to the discussion, but I have to note here that Tensorflow (Theano, Keras, whatever) are actually functional programming languages, “embedded” into imperative.
And I feel that this is actually the most natural place for them to dwell in. We “think” in an imperative way, but we “learn” in a functional way.
Google must be liking this article :-). F# uses computation expressions and was the first to create the “async/await” pattern that all other languages are starting to use. You can use computation expressions to make all other functional-style designs into more imperative designs. Like the monad:
This is a little more complex rendition where you are taking into account asynchronous code and an Ok/Result response. I would say it is actually more understandable than imperative code itself since imperative code would have a bunch of
if (fluffyMixture == null) return Error...
statements making it quite messy. But alas. Even with this amazing F# code with units and everything it is still not popular. I think part of the problem is that people attracted to functional languages don’t understand how to explain it very well and enjoy the arcane.
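For readers without F#, the Ok/Error threading that computation expressions give you can be approximated in JavaScript with a small Result helper (a sketch, not the F# syntax; the baking step names are invented):

```javascript
// A minimal Result type: { ok: true, value } or { ok: false, error }.
const Ok = (value) => ({ ok: true, value });
const Err = (error) => ({ ok: false, error });

// bind short-circuits on the first error, so null checks never pile up.
const bind = (result, f) => (result.ok ? f(result.value) : result);

const cream = (butter) =>
  butter ? Ok("fluffy mixture") : Err("no butter");
const addEggs = (mixture) => Ok(mixture + " with eggs");

console.log(bind(cream("butter"), addEggs));
// { ok: true, value: "fluffy mixture with eggs" }
console.log(bind(cream(null), addEggs));
// { ok: false, error: "no butter" }
```

Each step only ever sees a successful value; the error path is handled once, in `bind`, instead of with a guard clause per step.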
Here’s the problem with it: it’s fine for your cake – but if you wanted to make an entire meal, you’d start having trouble: because everything is a function of everything you’ve already defined, you have to be able to hold the whole thing in your head before you even start. To me (someone who’s been programming for 40 years) functional programming is a toy: anything much more complex than a cake becomes an acrobatic feat.
I can see how you would struggle; it’s a different way of thinking and attacking the problem you would like to solve. However, if you were to give Elixir a chance in 2019, you would be pleasantly surprised after the first week of pain.
Thank you!
It’s the most realistic example I’ve ever seen
Considering we need to translate the recipe into the real world, probably the best way would be to think about the essentials. What are the main characteristics of functional programming and what action or manifestation of physical events does it resemble?
From what I understand, one of the main factors in functional programming is immutability. But immutability does not exist in the real world. What we could do is look for workarounds such as what we do when we want to practice FP in non-functional languages.
Let’s imagine we have the ingredients for the cake, but we want to keep its state for some reason. Our first function would be to “buy the ingredients.” If we think about it, functional programming does not transform the data it receives, but it does have to create other data, with the difference that it does not store it. The function (and excuse me a bit, but I’m going to have to write it in JS):
const add = (a, b) => a + b;
add(5, 3);
// 8
That a + b at some point had to be 3 + 5 to be able to deliver the result. That calculation is being done literally somewhere in the computer, even if it is many layers below.
If we make our first function buy those ingredients and then, through more functions, change their state, the idea that a function receives an input and produces an output would be fulfilled, without having to modify the state of the initial ingredients.
When translating programming elements into the real world we have to consider that the computer is doing other things that we do not control or see and therefore some actions of preparing a recipe will be covered by those operations.
Do you think what I’m saying makes sense? Or maybe I still don’t quite understand what functional programming is all about? Any comments are appreciated.
I tend to agree. I don’t think I’ve seen anything larger than small demonstrations in functional programming which doesn’t break the rules somehow: in F#, for example, you can do all the functional programming stuff, and the immutability… but somewhere in there you find code which is basically weird C#, doing what regular programming does, just to force the functional stuff to work. I think functional programming is an attempt to make reality conform to the language, rather than having a language which represents reality. It’s fine as a university thesis, but it’s not applicable to real problems.
I’m reasoning a bit differently about this topic. I see you regard explicit state as hard to model and are therefore in favour of implicit state. I think you should review this more on a case-by-case basis instead of making a general claim about it. The main reason to use explicit state is so your compiler can save you from making mistakes. By modelling states you will probably get a compilation error if you later make a change that is not going to work. Implicit state in that sense means that you take on the responsibility of making sure the whole code base will still work after every change, since the compiler will not help you here. This is basically what “FP” tries to do: it tries to make sure the compiler catches most mistakes instead of relying fully on tests and code reviews. That being said, every case is different, and you should always look at what is best for the situation, but do consider that it is a trade-off: you don’t have to do any modelling, but you also won’t get compiler protection. (Of course, for trivial software most rules don’t really apply, since it is trivial.)
I don’t think it’s any accident that people tend to learn imperative languages first and then maybe later try to ascend to functional coding. The world is based on causal sequences, people tend to think in causal sequences, and the first thing we learn about any process, even mathematics, is that it is at bottom sequential. We learn mathematics through arithmetic, and we learn arithmetic through counting. Once we have mastered the underlying causal sequence of a thing, we can then start to use abstractions like sets and variables. To me, functional programming is a type of abstraction which is difficult to master, but useful for certain types of problem. I don’t use set theory to count cash to pay the grocer, but I have used it to solve certain types of abstract problems that are hard to represent as a sequence. The point is that once you have represented the problem as a set of abstractions, you can easily translate the solution into sequences. Which is exactly what the compiler of a functional language does.
def Get_Cake_for_Recipe(Recipe) -> Cake:
All the cooking and preparing is dealt with in the lower-level functions.
Is this not functional or am I missing something?
Looks imperative to me. One instruction after another. Why do you think it’s functional?
You’re not missing anything. Yes, the “recipe” analogy is almost always functional.
Nobody (at least that I know) washes the dishes by first counting all the dishes, then creating a little box labelled ‘i’ into which the current dish count is maintained, and repeatedly inspected to determined if we have reached the dish count. A for-loop is extremely counter-intuitive (try to remember the day you internalised it in your mind).
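The contrast the comment draws, the little box labelled ‘i’ versus simply washing each dish, looks like this in JavaScript:

```javascript
const dishes = ["plate", "cup", "bowl"];
const wash = (dish) => `clean ${dish}`;

// The counter-box version: maintain an index i and inspect it on each pass.
const washedImperative = [];
for (let i = 0; i < dishes.length; i++) {
  washedImperative.push(wash(dishes[i]));
}

// The "wash each dish" version: no counter anywhere.
const washedFunctional = dishes.map(wash);

console.log(washedFunctional); // ["clean plate", "clean cup", "clean bowl"]
```

Both produce the same result; only the second reads the way a person would actually describe the chore.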
There is a popular false dichotomy between “imperative” and “functional” programming that confuses many people (e.g. the author of this article). I did imperative functional programming this morning. I just happened to have used Haskell to do it; one of the most practical imperative languages available.
Truly bizarre to read this given that this sort of thing is uniquely suited to functional programming. This is in Javascript using Promises, which is the closest thing to functional task monads it has. See https://medium.com/swlh/what-the-fork-c250065df17d
Naive sequential function to illustrate the point:
Not only is this self-documenting, every single function is individually testable and reusable across other projects. They can be composed into separate functions to improve reuse…
And they can be trivially parallelised while maintaining testability and readability.
Now that we look at it, we can see that all cakes follow a similar pattern, so we can make a higher-order function…
And we can improve our individual functions by making them higher-order as well, allowing config to be passed in, all the while improving self-documentation…
Just swap Promises with Task monads and you’re golden.
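A sketch of the kind of Promise pipeline this comment describes (the step names are invented; as the comment says, Task monads could be swapped in):

```javascript
// Each step is an async function; independent steps run in parallel
// via Promise.all, and the rest compose sequentially with .then.
const mixDry = async () => ["flour", "soda", "salt"];
const mixWet = async () => ["butter", "sugar", "eggs"];
const combine = async ([dry, wet]) => [...dry, ...wet];
const bake = async (batter) => `cake of ${batter.length} ingredients`;

const makeCake = () =>
  Promise.all([mixDry(), mixWet()]).then(combine).then(bake);

makeCake().then(console.log); // "cake of 6 ingredients"
```

Each step is individually testable, and parallelising the two mixing steps required no change to `combine` or `bake`, which is the reuse-and-composition point the comment is making.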