Difference-of-Convex (DC) programming and the DC algorithm (DCA) play a key role in nonsmooth nonconvex analysis and optimization. In the 35 years since their introduction, these theoretical and algorithmic tools have been greatly enriched by researchers and practitioners worldwide, who have applied them to model and successfully solve nonconvex programs arising in many fields of applied science. Despite these successes, how to exploit the full power and flexibility offered by these tools to improve the standard DCA so that it better handles large-scale problems and big data remains a major challenge. This paper addresses the three key issues raised by this challenge: finding tailored DC decompositions, speeding up convergence, and reducing the computational burden of the standard DCA. Several advanced algorithms are proposed: the successive DC decomposition algorithm, in which the DC components are updated from one iteration to the next; its inexact version; and accelerated variants of both, obtained via Nesterov’s acceleration technique. The convergence properties of the proposed algorithms are carefully studied. We prove that, despite these modifications, the proposed algorithms enjoy convergence properties similar to those of the standard DCA. Moreover, we establish their convergence rates under the Kurdyka–Łojasiewicz assumption. Finally, we show that several popular existing methods are special cases of the proposed algorithms.
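For context, the standard DCA referenced above can be summarized as follows; this is a minimal sketch using standard notation from the DC programming literature, and the symbols $g$, $h$, $y^k$, and the particular extrapolation rule shown are illustrative rather than taken from this paper. Given a DC program $\min_x f(x) = g(x) - h(x)$ with $g$ and $h$ convex, each DCA iteration linearizes the concave part $-h$ at the current point and solves the resulting convex subproblem; Nesterov-type acceleration, in the spirit of the accelerated variants mentioned above, typically injects an extrapolation step before this linearization:

% Standard DCA iteration for min_x f(x) = g(x) - h(x), with g, h convex:
% pick a subgradient of h at x^k, then solve the convex subproblem.
\begin{align*}
  y^k &\in \partial h(x^k), \\
  x^{k+1} &\in \operatorname*{arg\,min}_{x} \bigl\{\, g(x) - \langle y^k, x \rangle \,\bigr\}.
\end{align*}
% An illustrative Nesterov-type accelerated variant extrapolates first
% (momentum coefficients t_k as in Nesterov/FISTA-style schemes):
\begin{align*}
  t_{k+1} &= \frac{1 + \sqrt{1 + 4 t_k^2}}{2}, \qquad
  z^k = x^k + \frac{t_k - 1}{t_{k+1}}\,(x^k - x^{k-1}), \\
  y^k &\in \partial h(z^k), \qquad
  x^{k+1} \in \operatorname*{arg\,min}_{x} \bigl\{\, g(x) - \langle y^k, x \rangle \,\bigr\}.
\end{align*}

Note that the subproblem in each case is convex, so it can be handled by standard convex solvers; the quality and cost of each iteration depend on the chosen DC decomposition $f = g - h$, which is precisely the degree of freedom the successive decomposition scheme above exploits.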