
I'm quite new to C# and I have a recursion issue to solve. I want to get the least number of coins possible in this coin change problem. I've adapted an algorithm to it but I need to return an object of type Change which will contain the minimum possible combination.

I've tried to implement an algorithm, but it produces all possible combinations instead of the minimal one.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Do not modify
class Change
{
    public long coin2 = 0;
    public long bill5 = 0;
    public long bill10 = 0;
}

class Solution
{
    // Do not modify method signature
    public static Change OptimalChange(long s)
    {
        Change _change = new Change();
        // To implement here
        return _change;
    }

    public static int GetCombinations(int total, int index, int[] list, List<int> cur)
    {
        if (total == 0)
        {
            foreach (var i in cur)
            {
                Console.Write(i + " ");
            }
            Console.WriteLine();
            return 1;
        }
        if (index == list.Length)
        {
            return 0;
        }
        int ret = 0;
        for (; total >= 0; total -= list[index])
        {
            ret += GetCombinations(total, index + 1, list, cur);
            cur.Add(list[index]);
        }
        for (int i = 0; i < cur.Count; i++)
        {
            while (cur.Count > i && cur[i] == list[index])
            {
                cur.RemoveAt(i);
            }
        }
        return ret;
    }
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello Change");
        int s = 41;
        /*
        Change m = Solution.OptimalChange(s);
        Console.WriteLine("Coin(s) 2 euros : " + m.coin2);
        Console.WriteLine("Bill(s) 5 euros : " + m.bill5);
        Console.WriteLine("Bill(s) 10 euros : " + m.bill10);
        long result = m.coin2 * 2 + m.bill5 * 5 + m.bill10 * 10;
        Console.WriteLine("\nChange given = " + result);
        */
        int[] l = { 2, 5, 10 };
        Solution.GetCombinations(s, 0, l, new List<int>());
    }
}
```

**Expected Result**

Example: The coins available are {2, 5, 10}.

-- I have an input of 12 --

The program should return

```
Coin(s) 2 euros : 1
Bill(s) 5 euros : 0
Bill(s) 10 euros : 1
```

-- I have an input of 17 --

The program should return

```
Coin(s) 2 euros : 1
Bill(s) 5 euros : 1
Bill(s) 10 euros : 1
```

First off, understand what the basic idea of recursion is:

- If we can solve the problem without recursion, solve it and return the solution.
- If we cannot, then reduce the problem to one or more smaller problems, solve each smaller problem recursively, and combine the solutions to solve the larger problem.

Second, understand what the basic idea of dynamic programming is:

- Recursive solutions often re-compute the same problem many times; it is sometimes more efficient to store the solution rather than re-computing it.
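
To make that concrete with a toy example (Fibonacci rather than the change problem, purely as an illustration; the cache dictionary is just one common way to memoize in C#):

```csharp
using System;
using System.Collections.Generic;

class MemoDemo
{
    // Cache of already-computed results; without it, Fib re-computes the
    // same subproblems exponentially many times.
    static readonly Dictionary<int, long> cache = new Dictionary<int, long>();

    public static long Fib(int n)
    {
        if (n < 2) return n;                             // base case: solve directly
        if (cache.TryGetValue(n, out long known)) return known; // reuse stored answer
        long result = Fib(n - 1) + Fib(n - 2);           // smaller problems
        cache[n] = result;                               // store for next time
        return result;
    }

    public static void Main()
    {
        Console.WriteLine(Fib(50)); // fast; the naive recursion would take ages
    }
}
```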

All right, let's attack your problem.

```csharp
// Do not modify
class Change
{
    public long coin2 = 0;
    public long bill5 = 0;
    public long bill10 = 0;
}
```

Pish tosh. This class is *terrible*. Fix it!

```csharp
sealed class Change
{
    public long Two { get; }
    public long Five { get; }
    public long Ten { get; }

    public Change(long two, long five, long ten)
    {
        this.Two = two;
        this.Five = five;
        this.Ten = ten;
    }

    public Change AddTwo() => new Change(Two + 1, Five, Ten);
    public Change AddFive() => new Change(Two, Five + 1, Ten);
    public Change AddTen() => new Change(Two, Five, Ten + 1);

    public long Count => Two + Five + Ten;
    public long Total => Two * 2 + Five * 5 + Ten * 10;
}
```

**Start with a data structure that helps you, not one that hurts you**.
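
As a quick illustration of what that buys you, here is a runnable demo; it repeats the class above verbatim so the snippet compiles on its own:

```csharp
using System;

sealed class Change
{
    public long Two { get; }
    public long Five { get; }
    public long Ten { get; }
    public Change(long two, long five, long ten) { Two = two; Five = five; Ten = ten; }
    public Change AddTwo() => new Change(Two + 1, Five, Ten);
    public Change AddFive() => new Change(Two, Five + 1, Ten);
    public Change AddTen() => new Change(Two, Five, Ten + 1);
    public long Count => Two + Five + Ten;
    public long Total => Two * 2 + Five * 5 + Ten * 10;
}

public class Demo
{
    public static void Main()
    {
        // Each Add* returns a new immutable value, so building up a
        // candidate solution is a simple chain of calls.
        Change c = new Change(0, 0, 0).AddTen().AddFive().AddTwo();
        Console.WriteLine(c.Count); // 3 pieces of money
        Console.WriteLine(c.Total); // 17 euros
    }
}
```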

Now, let's write a recursive function:

```csharp
public static Change OptimalChange(long s)
{
    // Are we in the base case? Solve the problem.
    // If not, break it down into a smaller problem.
}
```

What's the base case? There are actually two. **If the sum is negative then there is no solution**, and **if the sum is zero then there is a zero solution**.

```csharp
public static Change OptimalChange(long s)
{
    if (s < 0) return null;
    if (s == 0) return new Change(0, 0, 0);
```

All right, that's the easy part. What's the hard part? Well, if there is a solution then either it has a two, or it has a five, or it has a ten, right? Or maybe all three. So let's find out.

```csharp
public static Change OptimalChange(long s)
{
    if (s < 0) return null;
    if (s == 0) return new Change(0, 0, 0);
    Change tryTen = OptimalChange(s - 10)?.AddTen();
    ...
```

Can you take it from here?

**See how much easier the problem gets when you have a data structure that implements the operations you need?** Again **always create a data structure which helps you**.

Next problem: **This algorithm is very inefficient**. Why? Because we are constantly re-doing problems we have already done. Suppose we are evaluating OptimalChange(22). That calls OptimalChange(12), which calls OptimalChange(7), which calls OptimalChange(5). But OptimalChange(12) also calls OptimalChange(10), which again calls OptimalChange(5). The answer hasn't changed, but we do the computation again.

So, what do we do? **We use a dynamic programming technique**. There are two ways to do it:

- Keep on being recursive, and memoize the recursive function.
- Create an array of change, and fill it in as you go.
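
Here is a minimal sketch of the first option. The `Best` helper is my own name for "pick the candidate with fewer pieces"; it is not part of the required API, and the immutable `Change` class is repeated so the sketch compiles on its own. Note that it is still recursive, so a very large `s` will still exhaust the stack:

```csharp
using System;
using System.Collections.Generic;

sealed class Change
{
    public long Two { get; }
    public long Five { get; }
    public long Ten { get; }
    public Change(long two, long five, long ten) { Two = two; Five = five; Ten = ten; }
    public Change AddTwo() => new Change(Two + 1, Five, Ten);
    public Change AddFive() => new Change(Two, Five + 1, Ten);
    public Change AddTen() => new Change(Two, Five, Ten + 1);
    public long Count => Two + Five + Ten;
    public long Total => Two * 2 + Five * 5 + Ten * 10;
}

static class Memoized
{
    static readonly Dictionary<long, Change> cache = new Dictionary<long, Change>();

    // Of two candidate solutions (either may be null), keep the one
    // with fewer pieces of money.
    static Change Best(Change a, Change b)
    {
        if (a == null) return b;
        if (b == null) return a;
        return a.Count <= b.Count ? a : b;
    }

    public static Change OptimalChange(long s)
    {
        if (s < 0) return null;
        if (s == 0) return new Change(0, 0, 0);
        if (cache.TryGetValue(s, out Change known)) return known;
        Change best = Best(
            OptimalChange(s - 10)?.AddTen(),
            Best(OptimalChange(s - 5)?.AddFive(),
                 OptimalChange(s - 2)?.AddTwo()));
        cache[s] = best; // remember the answer, even when it is "no solution"
        return best;
    }
}

public class Program
{
    public static void Main()
    {
        Change c = Memoized.OptimalChange(17);
        Console.WriteLine($"{c.Two} x2, {c.Five} x5, {c.Ten} x10"); // 1 x2, 1 x5, 1 x10
    }
}
```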

But wait, there are more problems than the performance problem. We make the problem smaller by a maximum of 10 and a minimum of 2 every time, but the parameter is a long; it could be billions or trillions. If we have a recursive solution, we'll blow the stack, and if we have an array-based solution, the array is too large to build.

**We need to be more clever to solve this problem across the given range of possible inputs**. Think about it hard; *can we solve the problem analytically, without recursion, arrays or long-running loops*? Or, equivalently, *is there a way to reduce extremely large problems to small problems quickly*? The small problem can then be solved with dynamic programming.
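
If you want something to check your own answer against afterwards: nothing below is given in the text above, but here is one way such a reduction might look for this particular denomination set. It uses a value tuple instead of the Change class purely to keep the sketch short, and the bound of 14 comes from the observation in the comment, which you should verify for yourself:

```csharp
using System;

public static class BigChange
{
    // For denominations {2, 5, 10}: a minimal solution never contains five
    // 2s (swap them for one 10) or two 5s (swap them for one 10), so the
    // non-ten part is worth at most 4*2 + 1*5 = 13. Therefore every
    // solvable s >= 14 must use a ten, and all those forced tens can be
    // peeled off arithmetically in constant time.
    public static (long two, long five, long ten)? OptimalChange(long s)
    {
        if (s < 0) return null;
        long tens = 0;
        if (s >= 14)
        {
            tens = (s - 14) / 10 + 1; // leaves a remainder in [4, 13]
            s -= tens * 10;
        }
        // Tiny remaining problem: brute force over a handful of candidates.
        (long two, long five, long ten)? best = null;
        for (long t = 0; t * 10 <= s; t += 1)
            for (long f = 0; t * 10 + f * 5 <= s; f += 1)
            {
                long rest = s - t * 10 - f * 5;
                if (rest % 2 != 0) continue;       // remainder must be all 2s
                (long two, long five, long ten) cand = (rest / 2, f, t);
                if (best == null ||
                    cand.two + cand.five + cand.ten <
                    best.Value.two + best.Value.five + best.Value.ten)
                    best = cand;
            }
        if (best == null) return null; // s was 1 or 3: no way to make it
        var b = best.Value;
        return (b.two, b.five, b.ten + tens);
    }

    public static void Main()
    {
        var r = OptimalChange(1_000_000_000_007).Value;
        Console.WriteLine($"{r.two} x2, {r.five} x5, {r.ten} x10");
    }
}
```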

As is always the case with homework problems **remember that you are bound by the rules of good scholarship**. If you use ideas from SO in your homework solutions, **you must give credit**. Not doing so is plagiarism and will get you expelled from a decent school if you keep it up.