A257687 Discard the most significant digit from factorial base representation of n, then convert back to decimal: a(n) = n - A257686(n).
0, 0, 0, 1, 0, 1, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0
Offset: 0
Examples
Factorial base representation (A007623) of 1 is "1"; discarding the most significant digit leaves nothing, taken to be zero, thus a(1) = 0.
Factorial base representation of 2 is "10"; discarding the most significant digit leaves "0", thus a(2) = 0.
Factorial base representation of 3 is "11"; discarding the most significant digit leaves "1", thus a(3) = 1.
Factorial base representation of 4 is "20"; discarding the most significant digit leaves "0", thus a(4) = 0.
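The examples above can be checked directly with a minimal sketch (not the entry's own program): convert n to its factorial-base digit list, drop the leading digit, and convert back. The helper names below are illustrative, not from the entry.

```python
from math import factorial

def to_factorial_base(n):
    """Factorial-base digits of n, most significant digit first."""
    if n == 0:
        return [0]
    digits, b = [], 2
    while n:
        digits.append(n % b)  # digit for weight (b-1)!
        n //= b
        b += 1
    return digits[::-1]

def from_factorial_base(digits):
    """Integer value of factorial-base digits (most significant first)."""
    # The digit at position i from the left has weight (len(digits) - i)!.
    return sum(d * factorial(len(digits) - i) for i, d in enumerate(digits))

def a(n):
    # Discard the most significant factorial-base digit, convert back.
    return from_factorial_base(to_factorial_base(n)[1:])
```

For instance, `to_factorial_base(4)` gives `[2, 0]`, dropping the `2` leaves `[0]`, so `a(4) = 0`, matching the example.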
Links
- Antti Karttunen, Table of n, a(n) for n = 0..10080
Crossrefs
Programs
- Mathematica
f[n_] := Block[{m = p = 1}, While[p*(m + 1) <= n, p = p*m; m++]; Mod[n, p]]; Array[f, 101, 0] (* Robert G. Wilson v, Jul 21 2015 *)
- Python
from sympy import factorial as f
def a007623(n, p=2): return n if n < p else a007623(n//p, p + 1)*10 + n%p
def a(n):
    s = str(a007623(n))[1:]  # drop the most significant digit
    return sum(int(d)*f(len(s) - i) for i, d in enumerate(s))
- Scheme
(define (A257687 n) (- n (A257686 n)))
Formula
a(n) = n - A257686(n).
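Equivalently, since the discarded most significant digit contributes d*k! where k! is the largest factorial not exceeding n, and the remaining digits sum to less than k!, one gets a(n) = n mod k! for n >= 1 (this is the identity the Mathematica program above exploits). A minimal sketch of this formulation:

```python
from math import factorial

def a(n):
    # a(n) = n mod k!, where k! is the largest factorial <= n (n >= 1);
    # a(0) = 0 by convention (empty digit string).
    if n == 0:
        return 0
    k = 1
    while factorial(k + 1) <= n:
        k += 1
    return n % factorial(k)
```

For example, for n = 8 the largest factorial not exceeding 8 is 3! = 6, and 8 mod 6 = 2 = a(8).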
Comments