It depends... That is the standard answer to this sort of question.
It is a difficult question to answer because it all depends on the application. I wouldn't bind all my data and indexes to named caches, because doing so would leave you with only 10GB for everything else. If you have a lot of tempdb activity, or a lot of sorting or hash joins (given the data size I guess hash joins will not be used much, but again, that depends), then you may need more memory for tempdb. Similarly, if you do not have the statement cache enabled and you run a lot of ad-hoc queries, then you will need a large procedure cache as well.
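Before moving anything, it helps to see how memory is carved up today. A minimal sketch, assuming the standard ASE system procedures (the sizes it reports for these parameters are in 2K memory pages):

    -- total memory and the two plan-related caches
    sp_configure 'max memory'
    go
    sp_configure 'procedure cache size'
    go
    sp_configure 'statement cache size'
    go
    -- all configured data caches, their pools and bindings
    sp_cacheconfig
    go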
If I were in your place, I would do the following.
1. Assign a proper procedure cache size, and make sure statement caching is enabled to reduce the pressure on this memory (a combined sketch of the corresponding commands follows this list).
2. Calculate the memory needed for all other configuration parameters, plus some headroom.
3. Keep the remaining memory in the default data cache (but partition the cache so that there are no spinlock contention issues).
4. Keep enough cache for the transaction log.
5. In your case the chance of doing physical I/O is very low, so keep the large I/O pool as small as possible.
6. Similarly, keep the asynchronous prefetch (APF) percentage as low as possible.
7. Keep the wash marker (wash size) as small as possible.
8. Use the relaxed LRU replacement strategy.
9. I am assuming the data is not compressed. You can compress it to free up memory for tempdb and other areas that need it.
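To make the list concrete, here is a rough sketch of most of those steps on a reasonably recent ASE (15.x or later). Every size, the cache name log_cache, the database your_db and the table big_table are placeholders for illustration, not recommendations, and some settings (such as cache_partition) only take effect after a restart:

    -- 1. procedure cache and statement cache (values here are 2K pages; sizes are placeholders)
    sp_configure 'procedure cache size', 102400
    go
    sp_configure 'statement cache size', 51200
    go
    sp_configure 'enable literal autoparam', 1   -- lets similar ad-hoc statements share cached plans
    go
    -- 3. size and partition the default data cache (cache_partition is static)
    sp_cacheconfig 'default data cache', '8G'
    go
    sp_cacheconfig 'default data cache', 'cache_partition=4'
    go
    -- 4. a dedicated log cache bound to syslogs (binding the log may require single-user mode)
    sp_cacheconfig 'log_cache', '200M', 'logonly'
    go
    sp_bindcache 'log_cache', 'your_db', 'syslogs'
    go
    -- 5. keep the large I/O pool small (16K pool on a 2K-page server)
    sp_poolconfig 'default data cache', '64M', '16K'
    go
    -- 6. low asynchronous prefetch limit on the 2K pool
    sp_poolconfig 'default data cache', '2K', 'local async prefetch limit=5'
    go
    -- 7. small wash size on the 2K pool
    sp_poolconfig 'default data cache', '2K', 'wash=20M'
    go
    -- 8. relaxed LRU replacement for the log cache
    sp_cacheconfig 'log_cache', 'relaxed'
    go
    -- 9. compress a large table to free memory (ASE 15.7+, compression must be licensed)
    alter table big_table set compression = page
    go
    reorg rebuild big_table
    go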
Then I would observe the performance of the application for a couple of weeks or a month, and try to fix any issues that show up in the sp_sysmon reports.
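For the monitoring part, sp_sysmon can sample the server over an interval; the interval and the report sections below are just examples:

    -- sample for 10 minutes and report only the data cache section
    sp_sysmon '00:10:00', dcache
    go
    -- same interval, procedure cache section
    sp_sysmon '00:10:00', pcache
    go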
However, if you know how your application uses the database, then you can create separate named caches, provided you have a proper understanding of your load.
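As a sketch of what that could look like (the cache names, sizes and the table dbo.orders in your_db are purely illustrative):

    -- a named cache for the hottest table
    sp_cacheconfig 'hot_cache', '1G', 'mixed'
    go
    sp_bindcache 'hot_cache', 'your_db', 'dbo.orders'
    go
    -- optionally a dedicated cache for tempdb if tempdb activity is heavy
    sp_cacheconfig 'tempdb_cache', '2G', 'mixed'
    go
    sp_bindcache 'tempdb_cache', 'tempdb'
    go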
I know it is a very generic answer, but the correct answer will be based on application usage and then some trial and error.
Thanks